Managing Serviceguard Extension for SAP
Version B.05.00
HP Part Number: T2803-90013
Published: March 2009
Printed in the US

© Copyright 2000-2009 Hewlett-Packard Development Company, L.P.

Legal Notices
Serviceguard, Serviceguard Extension for SAP, Serviceguard Extension for RAC, Metrocluster and Serviceguard Manager are products of Hewlett-Packard Company, L.P., and all are protected by copyright. Confidential computer software. Valid license from HP required for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.
The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
SAP™ and Netweaver™ are trademarks or registered trademarks of SAP AG. HP-UX® is a registered trademark of Hewlett-Packard Development Company, L.P. Java™ is a U.S. trademark of Sun Microsystems, Inc. Intel® and Itanium® are registered trademarks of Intel Corporation or its subsidiaries in the United States or other countries. Oracle® is a registered trademark of Oracle Corporation. EMC2™, Symmetrix™, and SRDF™ are trademarks of EMC Corporation.

Table of Contents

Printing History
  About this Manual
  Related Documentation
1 Designing SGeSAP Cluster Scenarios
  General Concepts of SGeSAP
  Mutual Failover Scenarios Using the Two Package Concept
  Robust Failover Using the One Package Concept
  Follow-and-Push Clusters with Replicated Enqueue
  Dedicated NFS Packages
  Dialog Instance Clusters as Simple Tool for Adaptive Enterprises
  Handling of Redundant Dialog Instances
  Dedicated Failover Host
2 Planning the Storage Layout
  SAP Instance Storage Considerations
    Option 1: SGeSAP NFS Cluster
      Common Directories that are Kept Local
      Directories that Reside on Shared Disks
    Option 2: SGeSAP NFS Idle Standby Cluster
      Common Directories that are Kept Local
      Directories that Reside on Shared Disks
    Option 3: SGeSAP CFS Cluster
      Common Directories that are Kept Local
      Directories that Reside on CFS
  Database Instance Storage Considerations
    Oracle Single Instance RDBMS
      Oracle databases in SGeSAP NFS and NFS Idle Standby Clusters
    Oracle Real Application Clusters
    MAXDB Storage Considerations
3 Step-by-Step Cluster Conversion
  SAP Preparation
    SAP Pre-Installation Considerations
      SAP Netweaver High Availability
    Replicated Enqueue Conversion
      Splitting an ABAP Central Instance
      Creation of Replication Instance
  HP-UX Configuration
    Directory Structure Configuration
      Cluster Filesystem Configuration
      Non-CFS Directory Structure Conversion
    Cluster Node Synchronization
    Cluster Node Configuration
    External Application Server Host Configuration
  Modular Package Configuration
  Legacy Package Configuration
    Serviceguard Configuration
    SGeSAP Configuration
      Specification of the Packaged SAP Components
      Configuration of Application Server Handling
      Optional Parameters and Customizable Functions
      Global Defaults
  HA NFS Toolkit Configuration
  Auto FS Configuration
  Database Configuration
    Additional Steps for Oracle
    Additional Steps for MAXDB
  SAP Application Server Configuration
    SAP ABAP Engine specific configuration steps
    SAP J2EE Engine specific installation steps
4 SAP Supply Chain Management
  More About Hot Standby
  Planning the Volume Manager Setup
    Option 1: Simple Clusters with Separated Packages
    Option 2: Non-MAXDB Environments
    Option 3: Full Flexibility
    Option 4: Hot Standby liveCache
  MAXDB Storage Considerations
  HP-UX Setup for Options 1, 2 and 3
    Cluster Node Synchronization
    Cluster Node Configuration
  HP-UX Setup for Option 4
  SGeSAP Modular Package Configuration
  SGeSAP Legacy Package Configuration
  Livecache Service Monitoring
  APO Setup Changes
  General Serviceguard Setup Changes
5 SAP Master Data Management (MDM)
  Master Data Management - Overview
    Master Data Management User Interface Components
    MDM Server Components
    SAP Netweaver XI components
  Installation and Configuration Considerations
    Prerequisites
    The MDM SGeSAP File System Layout
    Single or Multiple MDM Serviceguard Package Configurations
      Single MDM Serviceguard Package (ONE)
      Multiple MDM Serviceguard packages (FOUR+ONE)
      Creating an initial Serviceguard package for the MDB Component
6 SGeSAP Cluster Administration
  Change Management
    System Level Changes
    SAP Software Changes
  Upgrading SAP Software
  Mixed Clusters

List of Figures
1-1 Two-Package Failover with Mutual Backup Scenario
1-2 One-Package Failover Scenario
1-3 Replicated Enqueue Clustering for ABAP and JAVA Instances
1-4 Failover Node with Application Server package
1-5 Replicated Enqueue Clustering for ABAP and JAVA Instances
3-1 sapcpe Mechanism for Executables
4-1 Hot Standby liveCache
4-2 Hot Standby System Configuration Wizard Screens
4-3 Example HA SCM Layout
5-1 MDM Graphical Overview
6-1 SGeSAP cluster displayed in the HP System Management Homepage

List of Tables
1 Editions and Releases
2 Abbreviations
1-1 Mapping the SGeSAP legacy package types to SGeSAP modules and different SAP naming conventions
2-1 Option descriptions
2-2 Instance Specific Volume Groups for exclusive activation with a package
2-3 System and Environment Specific Volume Groups
2-4 File systems for the SGeSAP package in NFS Idle Standby Clusters
2-5 File System Layout for SGeSAP CFS Clusters
2-6 Availability of SGeSAP Storage Layout Options for Different Database RDBMS
2-7 NLS Files - Default Location
2-8 File System Layout for NFS-based Oracle Clusters
2-9 File System Layout for Oracle RAC in SGeSAP CFS Cluster
2-10 File System Layout for SAPDB Clusters
3-1 Hosts and Device Minor Numbers
3-2 Groupfile File Groups
3-3 Password File Users
3-4 Services on the Primary Node
3-5 Relocatable IP Address Information
3-6 Overview of reasonable ASTREAT values
3-7 Optional Parameters and Customizable Functions List
3-8 Working with the two parts of the file
3-9 IS1130 Installation Step
4-1 Supported SGeSAP lc package types
4-2 File System Layout for liveCache Package running separate from APO (Option 1)
4-3 File System Layout for liveCache in a non-MAXDB Environment (Option 2)
4-4 General File System Layout for liveCache (Option 3)
4-5 File System Layout for Hot Standby liveCache
5-1 MDM User Interface and Command Line Components
5-2 MDM Server Components
5-3 MDM parameter descriptions
5-4 MDM_MGROUP and MDM_MASTER dependencies

Printing History

Table 1 Editions and Releases
Printing Date   Part Number   Edition     SGeSAP Release      Operating System Releases
June 2000       B7885-90004   Edition 1   B.03.02             HP-UX 10.20 and HP-UX 11.00
March 2001      B7885-90009   Edition 2   B.03.03             HP-UX 10.20, HP-UX 11.00 and HP-UX 11i
June 2001       B7885-90011   Edition 3   B.03.04             HP-UX 10.20, HP-UX 11.00 and HP-UX 11i
March 2002      B7885-90013   Edition 4   B.03.06             HP-UX 11.00 and HP-UX 11i
June 2003       B7885-90018   Edition 5   B.03.08             HP-UX 11i
December 2004   T2357-90007   Edition 6   B.03.12             HP-UX 11i and HP-UX 11i v2
December 2005   T2357-90009   Edition 7   B.04.00             HP-UX 11i and HP-UX 11i v2
March 2006      T2803-90002   Edition 8   B.04.01             HP-UX 11i and HP-UX 11i v2
February 2007   T2803-90004   Edition 9   B.04.50             HP-UX 11i v3
November 2007   T2803-90011   Edition 10  B.04.02 / B.04.51   HP-UX 11i, HP-UX 11i v2 and HP-UX 11i v3
March 2009      T2803-90013   Edition 11  B.05.00             HP-UX 11i v2 and HP-UX 11i v3

The printing date and part number indicate the current edition. The printing date changes when a new edition is printed.
(Minor corrections and updates which are incorporated at reprint do not cause the date to change.) The part number changes when extensive technical changes are incorporated. New editions of this manual incorporate all material updated since the previous edition.

HP Printing Division:
Business Critical Computing Business Unit
Hewlett-Packard Co.
19111 Pruneridge Ave.
Cupertino, CA 95014

About this Manual
This document describes how to configure and install highly available SAP systems on HP-UX 11i v2 and HP-UX 11i v3 using Serviceguard. It refers to HP product T2803BA - Serviceguard Extension for SAP (SGeSAP). To understand this document, you need to be familiar with basic Serviceguard concepts and commands. Experience with the Basis Components of SAP is also helpful.
This manual consists of six chapters:
• Chapter 1 "Designing SGeSAP Cluster Scenarios" describes how to design a high availability SAP environment and points out how SAP components can be clustered.
• Chapter 2 "Planning the Storage Layout" proposes the recommended highly available file system and shared storage layout for the SAP landscape and database systems.
• Chapter 3 "Step-by-Step Cluster Conversion" describes the installation of SGeSAP step by step, down to the HP-UX command level.
• Chapter 4 "SAP Supply Chain Management" deals specifically with the SAP SCM and liveCache technology, gives a storage layout proposal and leads through the SGeSAP cluster conversion step by step.
• Chapter 5 "SAP Master Data Management" deals specifically with the SAP MDM technology and leads through the SGeSAP cluster conversion step by step.
• Chapter 6 "SGeSAP Cluster Administration" covers SGeSAP administration aspects, as well as the use of different HP-UX platforms in a mixed cluster environment.

Table 2 Abbreviations
<SID>, <sid>               System ID of the SAP system, RDBMS or other components, in uppercase/lowercase
<INSTNAME>                 SAP instance name, e.g. DVEBMGS, D, J, ASCS, SCS, ERS
[A]SCS                     refers to either an SCS or an ASCS instance
<INSTNR>                   instance number of the SAP system
<local1>, <local2>, ...    names mapped to local IP addresses of the client LAN
<reloc1>, <reloc2>, ...    names mapped to relocatable IP addresses of Serviceguard packages in the client LAN
<locals1>, <locals2>, ...  names mapped to local IP addresses of the server LAN
<relocs1>, <relocs2>, ...  names mapped to relocatable IP addresses of Serviceguard packages in the server LAN
<...>                      other abbreviations are self-explanatory and can be derived from the surrounding context

Related Documentation
The following documents contain additional related information:
• Serviceguard Extension for SAP Version B.05.00 Release Notes (T2803-90012)
• Managing Serviceguard (B3936-90135)
• Serviceguard Release Notes (B3936-90119)
• Serviceguard NFS Toolkit A.11.11.08 and A.11.23.07 Release Notes (B5140-90032)
• Serviceguard NFS Toolkit A.11.31.03 Release Notes (B5140-90038)
• HP Storageworks RAID Manager XP User Guide (T1610-96005)

1 Designing SGeSAP Cluster Scenarios
This chapter introduces the basic concepts used by the HP Serviceguard Extension for SAP (SGeSAP) and explains several naming conventions.
The following sections provide recommendations and examples for typical cluster layouts that can be implemented for SAP environments:
• General Concepts of SGeSAP
• Mutual Failover Scenarios Using the Two Package Concept
• Robust Failover Using the One Package Concept
• Follow-and-Push Clusters with Replicated Enqueue
• Dedicated NFS Packages
• Dialog Instance Clusters as Simple Tool for Adaptive Enterprises
• Handling of Redundant Dialog Instances
• Dedicated Failover Host

General Concepts of SGeSAP
SGeSAP extends HP Serviceguard's failover cluster capabilities to SAP application environments. It continuously monitors the health of each SAP cluster node and automatically responds to failures or threshold violations. SGeSAP provides a flexible framework of package templates that make it easy to define cluster packages protecting the various components of a mission-critical SAP infrastructure.
SGeSAP provides a single, uniform interface to cluster ABAP-only, JAVA-only, and add-in installations of SAP Web Application Servers (SAP WAS). Support includes SAP R/3 kernel, mySAP components, SAP Application Server for ABAP, SAP Application Server for JAVA, and SAP Netweaver based SAP applications in a range of supported release versions as specified in the separately available release notes. The clustered SAP components include SAP ABAP Central Services, SAP JAVA Central Services, SAP ABAP Application Servers, SAP JAVA Application Servers, SAP Central Instances, SAP Enqueue Replication Servers, Oracle single-instance databases, MAXDB databases, SAP liveCache and SAP MDM components. For some platforms, support for liveCache hot standby clusters is included.
It is possible to combine all clustered components of a single SAP system into one failover package for simplicity and convenience. There is also full flexibility to split components up into several packages to avoid unwanted dependencies and to lower potential failover times. Multiple SAP applications of different type and release version can be consolidated in a single cluster.
SGeSAP enables SAP instance virtualization. It is possible to use SGeSAP to move redundant SAP ABAP Application Server Instances between hosts to quickly adapt to changing resource demands or maintenance needs. SGeSAP allows utilizing a combination of HP 9000 and HP Integrity servers in a mixed cluster with heterogeneous failover of SAP packages.
SAP applications can be divided into one or more distinct software components. Most of these components share a common technology layer, the SAP Application Server (SAP WAS). The SAP Application Server is the central building block of the SAP Netweaver technology. Each Application Server implementation comes with a characteristic set of software Single Points of Failure. These get installed across the cluster hardware according to high availability considerations and other constraints, resulting in an individual configuration recommendation.
There are various publications available from SAP and third parties that describe the software components used by SAP applications in more detail. It is recommended to refer to these documents to gain basic familiarity before reading on. It is also recommended to become familiar with Serviceguard clustering and virtualization by reading the Serviceguard product manual "Managing Serviceguard", fifteenth edition or higher. The latest version can always be found at http://docs.hp.com/en/ha.html#Serviceguard.
Serviceguard packages can be distinguished into legacy packages and module-based packages. SGeSAP provides solutions for both approaches. SGeSAP consists of several SAP-related modules, legacy script templates and SAP software service monitors, as well as specialized additional features that integrate hot standby liveCache scenarios, HP Workload Management scenarios and HP Event Monitors.
There are three major Serviceguard modules delivered with SGeSAP. For the standard SAP Netweaver web application server stack, SGeSAP provides a Serviceguard module called sgesap/sapinstance. This module can be used to easily add a set of SAP instances that belong to the same Netweaver-based system to a module-based Serviceguard package. The package can encapsulate the failover entity for a combination of ABAP-stack, JAVA-stack and dual-stack instances plus, optionally, either Central Service Instances or Enqueue Replication Service Instances of an SAP System. For MAXDB or Oracle-based SAP database services, the module sgesap/dbinstance can be used. The module to cluster SAP liveCache instances is called sgesap/livecache.
In addition to these three major modules, there are two more: sgesap/sapinfra enables easy clustering of smaller SAP infrastructure software tools, and sgesap/sapextinstance allows manipulating the behavior of non-clustered SAP instances. The covered infrastructure tools include the SAP sapccmsr, saposcol, rfcadapter and saprouter binaries. Other SGeSAP module names exist that provide a combination or subset of the functionality of the five modules mentioned above. They were primarily defined for convenience, to simplify configuration steps for standard use cases.
In legacy packaging, each software Single Point of Failure defines an SGeSAP package type. SGeSAP follows a consistent naming convention for these package types. The naming conventions were created to be independent of SAP software release versions. This allows a similar approach for each SPOF, regardless of whether it appears in the latest SAP Netweaver stack or in SAP software that was released before the first design of the SAP Application Server. Older SAP components sometimes only support a subset of the available clustering options.
Defining a mixture of legacy packages and module-based packages is possible in the same cluster. MDM packages, cross-subnet extensions for non-production use and SAP dispatcher monitoring are currently available in legacy packages only. Legacy-based packages will be discontinued at a later point in time. By then, all SGeSAP functionality will be available in a module version.

Table 1-1 Mapping the SGeSAP legacy package types to SGeSAP modules and different SAP naming conventions
Legacy type   SGeSAP module names                                                 Commonly used SAP instance names
ci            sgesap/sapinstance (alternatives: sgesap/scs, sgesap/ci)            DVEBMGS (as Central Instance), ASCS
jci           sgesap/sapinstance (alternatives: sgesap/scs, sgesap/ci)            SCS
arep          sgesap/sapinstance (alternative: sgesap/ers)                        AREP, ENR, ERS
rep           sgesap/sapinstance (alternative: sgesap/ers)                        REP, ENR, ERS
d             sgesap/sapinstance                                                  D, DVEBMGS (new)
jd            sgesap/sapinstance                                                  JDI, JD, JC, J
db            sgesap/dbinstance (alternatives: sgesap/db, sgesap/maxdb, sgesap/oracledb)
lc            sgesap/livecache
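For module-based packages, the modules above are assembled with standard Serviceguard commands. A minimal sketch, assuming a hypothetical SID C11 and a package that combines the database and the Central Service instance:

    # Generate a configuration template that includes the SGeSAP modules:
    cmmakepkg -m sgesap/dbinstance -m sgesap/sapinstance dbciC11.conf
    # Edit dbciC11.conf (package name, nodes, storage, SAP attributes), then:
    cmcheckconf -P dbciC11.conf     # verify the package configuration
    cmapplyconf -P dbciC11.conf     # add the package to the cluster

The generated template lists all attributes of the named modules, which is also a convenient way to look up the exact parameter names available in the installed SGeSAP release.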
Mutual Failover Scenarios Using the Two Package Concept
Most SAP applications rely on two central software services that define the major software Single Points of Failure (SPOF) for SAP environments: the SAP Enqueue Service and the SAP Message Service. These services are traditionally combined and run as part of a unique SAP Instance that is referred to as the JAVA System Central Service Instance (SCS) for SAP JAVA applications or the ABAP System Central Service Instance (ASCS) for SAP ABAP applications. If an SAP application has both JAVA and ABAP components, it is possible to have both an SCS and an ASCS instance for one SAP application. In this case, both instances are SPOFs that require clustering.
In pure ABAP environments, the term Central Instance (ci) is still in use for a software entity that combines further SAP application services with these SPOFs in a single instance. Like any other SAP instance, a Central Instance has an Instance Name. Traditionally it is called DVEBMGS, where each letter represents a service that is delivered by the instance. The "E" and the "M" stand for the Enqueue and Message Service that were identified as SPOFs in the system. Other SAP services can potentially be installed redundantly within additional Application Server instances, sometimes called Dialog Instances. As the name DVEBMGS suggests, there are more services available within the Central Instance than just those that cause the SPOFs. An undesirable result is that a Central Instance is a complex piece of software with a high resource demand. Shutdown and startup of Central Instances are slower and more error-prone than they could be.
Starting with SAP Application Server 6.40 it is possible to isolate the SPOFs of the Central Instance in a separate instance that is then called the ABAP System Central Service Instance, in short ASCS. The installer for SAP Application Server can install the ASCS automatically. This installation procedure then also creates a standard Dialog Instance that is called DVEBMGS for compatibility reasons. This kind of DVEBMGS instance provides no Enqueue Service and no Message Service and is no longer a Central Instance.
A package that uses the sgesap/sapinstance module can be set up to cluster the SCS and/or ASCS (or Central Instance) of a single SAP application. The SGeSAP legacy ci package contains either a full DVEBMGS instance or an ASCS instance. The SGeSAP legacy jci package contains an SCS instance. In any case, these packages provide failover capabilities to SAP Enqueue Services and SAP Message Services.
SAP application servers also require a database service, which usually defines the second software SPOF. SGeSAP bundles cluster capabilities for single-instance ORACLE RDBMS and SAP MAXDB RDBMS database services. The module sgesap/dbinstance (and similarly the legacy package type db) clusters any of these databases. The module unifies the configuration, so that database package administration is treated identically for all database vendors. sgesap/dbinstance can be used with any type of SAP application, independent of whether it is ABAP-based, JAVA-based or both. Where available, the module takes advantage of database tools that are shipped with certain SAP applications.
An SGeSAP legacy jdb package contains a database instance for SAP JAVA applications. An SGeSAP legacy db package contains a database instance for an ABAP application or a combined ABAP and JAVA application.
NOTE: It is not allowed to specify a single SGeSAP package with two database instances in it. An environment with db and jdb requires at least two packages to be defined.
If you are planning a simple three-tier SAP layout in a two node cluster, use the SGeSAP mutual failover model. This approach distinguishes two SGeSAP packages, one for the database SPOF and the other for the SAP SPOFs as defined above. In small and medium size environments, the database package is combined with HA NFS server functionality to provide all filesystems that are required by the software in both packages. During normal operation, the two packages run on different nodes of the cluster.
NOTE:
• Module-based SGeSAP database packages cannot be combined with a legacy based NFS toolkit to create a single package.
• The major advantage of this approach is that a failed SAP package will never cause a costly failover of the underlying database, since the database is separated in a different package.
• It is not a requirement, but it can help to reduce the complexity of a cluster setup if SCS and ASCS are combined in a single package. In this case, it needs to be considered that the failure of one of the two instances will also cause a failover of the other instance. This might be tolerable in those cases in which SAP replication instances are configured (see below).
The process of failover results in downtime that typically lasts a few minutes, depending on the work in progress when the failover takes place. A main portion of the downtime is needed for the recovery of the database. The total recovery time of a failed database cannot be predicted reliably. By tuning the Serviceguard heartbeat on a dedicated heartbeat LAN, it is possible to achieve failover times in the range of a minute or two for a ci package that contains a lightweight [A]SCS instance without a database.
NOTE: sgesap/sapinstance packages can identify the state of a corresponding sgesap/dbinstance package in the same cluster without explicitly configured Serviceguard package dependencies. This information is used, for example, to delay SAP instance package startup while the database is starting in a separate package but is not yet ready to accept connections.
A cluster can be configured in a way that two nodes back up each other. The principal layout is depicted in Figure 1-1. This picture, as well as the following drawings, is meant to illustrate basic principles in a clear and simple fashion. They omit aspects and details that would be required for a reasonable and complete high availability configuration.
Figure 1-1 Two-Package Failover with Mutual Backup Scenario
It is a best practice to base the package naming on the SAP instance naming conventions whenever possible. Each package name should also include the SAP System Identifier (SID) of the system to which the package belongs. If similar packages of the same type get added later, they have a distinct namespace because they have a different SID. Example: A simple mutual failover scenario for an ABAP application defines two packages, called dbSID and ascsSID (or ciSID for old SAP releases).
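Translated into modular package attributes, the two packages of this example might contain entries like the following sketch. The SID C11 and instance number 01 are hypothetical; the attribute names shown (sap_system, sap_instance, db_system, db_vendor) follow the SGeSAP module templates and should be verified against the template that cmmakepkg generates for the installed release:

    # ascsC11.conf - SAP Central Service package (excerpt)
    package_name    ascsC11
    sap_system      C11
    sap_instance    ASCS01

    # dbC11.conf - database package (excerpt)
    package_name    dbC11
    db_system       C11
    db_vendor       oracle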
Robust Failover Using the One Package Concept
In a one-package configuration, the database, NFS and SAP SPOFs run on the same node at all times and are configured in one SGeSAP package. Other nodes in the Serviceguard cluster function as failover nodes for the primary node on which the system runs during normal operation.
NOTE: Module-based SGeSAP packages cannot be combined with a legacy based NFS toolkit to create a single package.
It is not required to maintain an expensive idle standby. SGeSAP allows utilizing the secondary node(s) for different instances during normal operation. A common setup installs one or more non-mission critical SAP Systems on the failover nodes, typically SAP Consolidation, Quality Assurance or Development Systems. They can be shut down gracefully by SGeSAP during failover to free up the computing resources for the critical production system. For modular packages, the sgesap/sapextinstance module can be added to the package to specify this kind of behavior. Development environments tend to be less stable than production systems. This should be taken into consideration before mixing these use-cases in a single cluster. A feasible alternative is to install Dialog Instances of the production system on the failover node.
If the primary node fails, the database and the Central Instance fail over and continue functioning on an adoptive node. After failover, the system runs without the need for manual intervention. All redundant Application Servers and Dialog Instances, even those that are not part of the cluster, can either stay up or be restarted triggered by a failover. A sample configuration in Figure 1-2 shows node1 with a failure, which causes the package containing the database and Central Instance to fail over to node2. A Quality Assurance System and additional Dialog Instances get shut down before the database and Central Instance are restarted.
Figure 1-2 One-Package Failover Scenario
Follow-and-Push Clusters with Replicated Enqueue
In case an environment has very high demands regarding guaranteed uptime, it makes sense to activate a Replicated Enqueue with SGeSAP. With this additional mechanism it is possible to fail over ABAP and/or JAVA System Central Service Instances without impacting ongoing transactions on Dialog Instances.
NOTE: It only makes sense to activate Enqueue Replication for systems that have Dialog Instances running on nodes different from the primary node of the System Central Service package.
Each SAP Enqueue Service maintains a table of locks that can temporarily be granted exclusively to an ongoing transaction. This mechanism avoids inconsistencies that could be caused by parallel transactions accessing the same data in the database simultaneously. In case of a failure of the Enqueue Service, the table with all granted locks is lost. After package failover and restart of the Enqueue Service, all Dialog Instances need to be notified that the lock table content was lost. As a reaction they cancel ongoing transactions that still hold granted locks. These transactions need to be restarted.
Enqueue Replication provides a concept that prevents this impact of an Enqueue Service failure on the Dialog Instances. Transactions no longer need to be restarted. The Enqueue Server has the ability to create a copy of its memory content in a Replicated Enqueue Service that needs to be running as part of an Enqueue Replication Service Instance (ERS) on a remote host. This is a real-time copy mechanism that ensures the replicated memory accurately reflects the status of the Enqueue Server at all times. There might be two ERS instances for a single SAP system, replicating SCS and ASCS locks separately.
Figure 1-3 Replicated Enqueue Clustering for ABAP and JAVA Instances
Enqueue Services also come as an integral part of each ABAP DVEBMGS Central Instance. This integrated version of the Enqueue Service is not able to utilize replication features. The DVEBMGS Instance needs to be split up into a standard Dialog Instance and an ABAP System Central Service Instance (ASCS).
The SGeSAP packaging of the ERS Instance provides startup and shutdown routines, failure detection, split-brain prevention and quorum services to the mechanism. SGeSAP also delivers an EMS (HP-UX Event Monitoring Service) monitor that implements a cluster resource called /applications/sap/enqor/<SID>ers for each Replicated Enqueue in the cluster. Monitoring requests can be created to regularly poll the status of each Replicated Enqueue.
NOTE: For SAP versions that were released before the ERS naming conventions got introduced, the resource is also offered, but called /applications/sap/enqor/<SID>[a]scs.
The EMS monitor can be used to define a resource in the Serviceguard packages. This implements a follow-and-push behavior for the two packages that include enqueue and its replication. As a result, an automatism makes sure that enqueue and its replication server are never started on the same node initially. Enqueue will not invalidate the replication accidentally by starting on a non-replication node while replication is active elsewhere. It is possible to move the package with the replication server to any free node in a multi-node cluster without a requirement to reconfigure the enqueue package failover policy. During failover of enqueue, its replication is located dynamically and the enqueue restarts on the currently active replication node. Enqueue synchronizes with the local replication server. As a next step, the package with the replication service shuts down automatically and restarts on a healthy node, if available. In case of a failover in a multi-node environment this implements a self-healing capability for the replication function. Enqueue will fail over to any node from the list of statically configured hosts if no replication package is running.
Two replication instances are required if Enqueue Replication Services are to be used for both the JAVA stack and the ABAP stack. Several configuration options derive from this approach. In most cases, it is best practice to create separate packages for ASCS, SCS and the two ERS instances. It is also supported to combine the replication instances within one SGeSAP package. It is also supported to combine ASCS and SCS in one package, but only if the two ERS instances are likewise combined in another package. It is not supported to combine ASCS and SCS in one package and keep the two ERS instances in two separate packages. Otherwise, situations can arise in which a failover of the combined ASCS/SCS package is not possible. Finally, ASCS cannot be combined with its ERS instance (AREP) in the same package. For the same reason, SCS cannot be combined with its ERS instance (REP).
The sgesap/sapinstance module can be used to cluster Enqueue Replication Instances. Furthermore, SGeSAP offers the legacy package types rep and arep to implement enqueue replication packages for JAVA and ABAP.
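As an illustration, the coupling to the enqor EMS monitor described above is expressed as a package resource. The following legacy-format excerpt is a sketch for a hypothetical SID C11; the polling interval and especially the RESOURCE_UP_VALUE string are placeholders here, and the exact values to configure are listed in the SGeSAP release documentation:

    # ascsC11 legacy package ASCII file (excerpt):
    # consult the enqor EMS monitor before choosing a failover node
    RESOURCE_NAME               /applications/sap/enqor/C11ers
    RESOURCE_POLLING_INTERVAL   30
    RESOURCE_START              AUTOMATIC
    # RESOURCE_UP_VALUE         <value defined by the SGeSAP documentation>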
SAP offers two possibilities to configure Enqueue Replication Servers:
1. SAP self-controlled, using High Availability polling
2. Completely controlled by the High Availability failover solution
SGeSAP provides an implementation that is completely controlled by the High Availability failover solution and avoids costly polling data exchange between SAP and the High Availability cluster software. There are several SAP profile parameters that are related to the self-controlled approach. Most of these parameters have names that start with the string enque/enrep/hafunc_. They do not have any effect in SGeSAP clusters.
Dedicated NFS Packages
Small clusters with only a few SGeSAP packages usually provide HA NFS by combining the HA NFS toolkit package functionality with the SGeSAP packages that contain a database component. The HA NFS toolkit is a separate product with a set of configuration and control files that must be customized for the SGeSAP environment. It needs to be obtained separately. HA NFS is delivered in a distributed fashion with each database package serving its own filesystems. By consolidating this into one package, all NFS serving capabilities can be removed from the database packages. In complex, consolidated environments with several SGeSAP packages it is of significant help to use one dedicated HA NFS package instead of blending this functionality into existing packages. A dedicated SAPNFS package is specialized to provide access to shared filesystems that are needed by more than one mySAP component. Typical filesystems served by SAPNFS would be the common SAP transport directory or the global SAPDB executable directory. The SAPDB client libraries are part of the global SAPDB executable directory, and access to these files is needed by APO and liveCache at the same time.
SGeSAP setups are designed to avoid HA NFS shared filesystems with heavy traffic if possible. For many implementations, this gives the option to use one SAPNFS package for all HA NFS needs in the SAP consolidation cluster without the risk of creating a serious performance bottleneck. HA NFS might still be required in configurations that use Cluster File Systems in order to provide access to the SAP transport directories for SAP instances that run on hosts outside of the cluster.
Dialog Instance Clusters as Simple Tool for Adaptive Enterprises
Databases and Central Instances are Single Points of Failure. ABAP and JAVA Dialog Instances can be installed in a redundant fashion. In theory, this allows avoiding additional SPOFs in Dialog Instances. It is nevertheless possible to configure systems in which Dialog Instances include SPOFs. A simple example of the need for an SAP Application Server package is to protect dedicated batch servers against hardware failures.
Any number of SAP Application Server instances can be added to a package that uses the module sgesap/sapinstance. SAP ABAP Dialog Instances can also be packaged using SGeSAP legacy package type 'd'. SAP JAVA Dialog Instances can be packaged using SGeSAP legacy package type 'jd'.
Dialog Instance packages allow an uncomplicated approach to achieve abstraction from the hardware layer. It is possible to shift Dialog Instance packages around between servers at any given time. This might be desirable if the CPU resource consumption eventually becomes poorly balanced due to changed usage patterns. Dialog Instances can then be moved between the different hosts to address this. A Dialog Instance can also be moved to a standby host to allow planned hardware maintenance for the node it was running on.
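Moving such a package is a matter of standard Serviceguard commands, sketched here for a hypothetical dialog instance package dC11 and a node named node2:

    # Halt the dialog instance package on its current node...
    cmhaltpkg dC11
    # ...start it on the designated target node...
    cmrunpkg -n node2 dC11
    # ...and re-enable package switching so automatic failover works again:
    cmmodpkg -e dC11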
One can simulate this flexibility by installing Dialog Instances on every host and activating them as required. This might be a feasible approach for many purposes, and it saves the need to maintain virtual IP addresses for each Dialog Instance. But there are ways in which SAP users unintentionally create additional short-term SPOFs during operation if they reference a specific instance via its hostname. This could, for example, happen during batch scheduling. With Dialog Instance packages, the system becomes invulnerable to this type of user error.
Dialog Instance virtualization packages provide high availability and flexibility at the same time, and the system becomes more robust. The virtualization allows moving the instances manually between the cluster hosts on demand.
Figure 1-4 Failover Node with Application Server package
Figure 1-4 illustrates a common configuration with the adoptive node running as a Dialog Server during normal operation. Node1 and node2 have equal computing power and the load is evenly distributed between the combination of database and Central Service Instance on node1 and the additional Dialog Instance on node2. If node1 fails, the Dialog Instance package is shut down during failover of the dbciSID package. This is similar to a one-package setup without Dialog Instance packaging. The advantage of this setup is that after repair of node1, the Dialog Instance package can simply be restarted on node1 instead of node2. This saves the downtime that a failback of the dbciSID package would otherwise cause. The two instances can be separated onto different machines without impacting the production environment negatively. It should be noted that for this scenario with just two hosts there is not necessarily a requirement to enable automatic failover for the Dialog Instance package.
The described shutdown operation for Dialog Instance packages can be specified in any SGeSAP legacy package directly. In modularized SGeSAP it is recommended to use generic Serviceguard package dependencies instead, as sketched below.
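A sketch of such a dependency, assuming a dialog instance package dC11 that must give way to the critical dbciC11 package (exclusionary, package-down dependencies require a Serviceguard release that supports them):

    # dC11.conf excerpt: keep the dialog package off any node where dbciC11 runs
    dependency_name        dbci_exclusion
    dependency_condition   dbciC11 = down
    dependency_location    same_node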
Handling of Redundant Dialog Instances
Non-critical SAP Application Servers can be run on HP-UX, SUSE or RedHat LINUX application server hosts. These hosts do not need to be part of the Serviceguard cluster. Even if the additional SAP services are run on nodes in the Serviceguard cluster, they are not necessarily protected by Serviceguard packages. A combination of Windows/HP-UX application servers is technically possible, but additional software is required to access HP-UX filesystems or HP-UX-like remote shells from the Windows system.
All non-packaged ABAP instances are subsequently called Additional Dialog Instances or, synonymously, Additional SAP Application Servers to distinguish them from mission-critical Dialog Instances. An additional Dialog Instance that runs on a cluster node is called an Internal Dialog Instance. External Dialog Instances run on HP-UX or Linux hosts that are not part of the cluster. Even if Dialog Instances are external to the cluster, they may be affected by package startup and shutdown.
For convenience, Additional Dialog Instances can be started, stopped or restarted with any SGeSAP package that secures critical components. Some SAP applications require the whole set of Dialog Instances to be restarted during failover of the Central Service package. This can be triggered with SGeSAP means.
It helps in understanding the concept to consider that all of these operations for non-clustered instances are inherently non-critical. If they fail, this failure won't have any impact on the ongoing package operation. A best-effort attempt is made, but there is no guarantee that the operation succeeds. If such operations need to succeed, package dependencies in combination with SGeSAP Dialog Instance packages need to be used.
Dialog Instances can be marked as being of minor importance. They will then be shut down if a critical component fails over to the host they run on, in order to free up resources for the non-redundant packaged components. Additional Dialog Instances never get reflected in package names. The described functionality can be achieved by adding the module sgesap/sapextinstance to the package. Legacy SGeSAP provides similar functionality, but SAP JAVA instances are not handled.
NOTE: Declaring non-critical Dialog Instances in a package configuration doesn't add them to the components that are secured by the package. The package won't react to any error conditions of these additional instances. The concept is distinct from the Dialog Instance packages that were explained in the previous section.
If Additional Dialog Instances are used, certain rules should be followed:
• Use saplogon with Application Server logon groups. When logging on to an application server group with two or more Dialog Instances, you don't need a different login procedure even if one of the Application Servers of the group fails. Using logon groups also provides workload balancing between Application Servers.
• Avoid specifying a destination host when defining a batch job. This allows the batch scheduler to choose a batch server that is available at the start time of the batch job. If you must specify a destination host, specify the batch server running on the Central Instance or on a packaged Application Server Instance.
• Print requests stay in the system until a node is available again and the Spool Server has been restarted. These requests can be moved manually to other spool servers if one spool server is unavailable for a long period of time. An alternative is to print all time-critical documents through the highly available spool server of the central instance.
• Configuring the Update Service as part of the packaged Central Instance is recommended. Consider using local update servers only if performance issues require it. In this case, configure Update Services for application services running on the same node. This ensures that the remaining SAP Instances on different nodes are not affected if an outage occurs on the Update Server. Otherwise a failure of the Update Service will lead to subsequent outages at different Dialog Instance nodes.
Dedicated Failover Host
More complicated clusters that consolidate a couple of SAP applications often have a dedicated failover server. While each SAP application has its own set of primary nodes, there is no need to also provide a failover node for each of these servers. Instead there is one commonly shared secondary node that is in principle capable of replacing any single failed primary node. Often, some or all of the primary nodes are partitions of a large server.
Figure 1-5 Replicated Enqueue Clustering for ABAP and JAVA Instances
Figure 1-5 shows an example configuration. The dedicated failover host can serve many purposes during normal operation.
With the introduction of Replicated Enqueue Servers, it is a good practice to consolidate a number of Replicated Enqueues on the dedicated failover host. These replication units can be halted at any time without disrupting ongoing transactions for the systems they belong to. They are ideally sacrificed in emergency conditions in which a failing database and/or Central Service Instance needs the spare resources.

2 Planning the Storage Layout
Volume managers are tools that let you create units of disk storage known as storage groups. Storage groups contain logical volumes for use on single systems and in high availability clusters. In Serviceguard clusters, package control scripts activate storage groups. Two volume managers can be used with Serviceguard: the standard Logical Volume Manager (LVM) of HP-UX and the Veritas Volume Manager (VxVM). SGeSAP can be used with both volume managers. The following steps describe two standard setups for the LVM volume manager. VxVM setups can be configured accordingly. A third storage layout option describes a Cluster File System (CFS) configuration for SGeSAP. In this case, VxVM must be used and all Application Servers need to run on cluster nodes. Chapter 3 explores the concepts discussed in this chapter and details the implementation steps.
Database storage layouts for use with parallel databases are only briefly described, for Oracle Real Application Clusters. Detailed configuration steps for parallel database technologies are not covered in this manual. Additional information about SGeSAP and parallel databases is released in whitepapers from HP. Refer to the Additional Reading section of the relevant SGeSAP release notes to verify the availability of whitepapers in this area.
This chapter discusses the disk layout for clustered SAP components and database components of several vendors at a conceptual level. It is divided into two main sections:
• SAP Instance Storage Considerations
• Database Instance Storage Considerations
SAP Instance Storage Considerations
In general, it is important to stay as close as possible to the original layout intended by SAP, but certain cluster specific considerations might suggest a slightly different approach in some cases. SGeSAP supports various combinations of providing shared access to file systems in the cluster. The possible storage layout and file system configuration options are described in Table 2-1.
Table 2-1 Option descriptions
Option 1 - SGeSAP NFS Cluster: Optimized to provide maximum flexibility. Following the recommendations given below allows for expansion of existing clusters without limitations caused by the cluster. Another important design goal of option 1 is that a redesign of the storage layout is not imperative when adding additional SAP components later on. Effective change management is an important aspect for production environments. The disk layout needs to be as flexible as possible to allow growth by just adding storage for newly added components. If the design is planned carefully at the beginning, it is not required to make changes to already existing file systems. Option 1 is recommended for environments that implement clusters with server consolidation if CFS is not available.
Option 2 - SGeSAP NFS Idle Standby Cluster: Optimized to provide maximum simplicity.
Option 3 - SGeSAP CFS Cluster: Combines maximum flexibility with the convenience of a Cluster File System. It is the most advanced option, and CFS should be used with SAP if available. The HP Serviceguard Cluster File System requires a set of multi-node packages. The number of packages varies with the number of disk groups and mountpoints for Cluster File Systems. This can be a limiting factor for highly consolidated SGeSAP environments.
The option is only feasible for very simple clusters whose layout and configuration foreseeably will not change over time. It comes with the disadvantage of being locked into restricted configurations with a single SAP System and idle standby nodes. HP recommends option 1 in case of uncertainty about potential future layout changes.

Option: 3 - SGeSAP CFS
Description: Cluster that combines maximum flexibility with the convenience of a Cluster File System. It is the most advanced option. CFS should be used with SAP if available. The HP Serviceguard Cluster File System requires a set of multi-node packages. The number of packages varies with the number of disk groups and mountpoints for Cluster File Systems. This can be a limiting factor for highly consolidated SGeSAP environments.

Each file system that gets added to a system by SAP installation routines needs to be classified and a decision has to be made:

• whether it needs to be kept as a local copy on internal disks of each node of the cluster,
• whether it needs to be shared on a SAN storage device to allow failover and exclusive activation, or
• whether it needs to provide shared access to more than one node of the cluster at the same time.

NOTE: SGeSAP packages and service monitors require SAP tools. Patching the SAP kernel sometimes also patches SAP tools. Depending on what SAP changed, this might introduce additional dependencies on shared libraries that were not required before the patch. Depending on the SHLIB_PATH settings of the root user, it might no longer be possible for SGeSAP to execute the SAP tools after applying the patch, because the newly introduced libraries are not found. Creating local copies of the complete central executable directory prevents this issue.

The following sections detail the three different storage options.

Option 1: SGeSAP NFS Cluster

With this storage setup SGeSAP makes extensive use of exclusive volume group activation. Concurrent shared access is provided via NFS services. Automounter and cross-mounting concepts are used in order to allow each node of the cluster to switch roles between serving and using NFS shares. It is possible to access the NFS file systems from servers outside of the cluster, which is an intrinsic part of many SAP configurations.

Common Directories that are Kept Local

The following common directories and their files are kept local on each node of the cluster:

• /home/<sid>adm — the home directory of the SAP system administrator with node specific startup log files
• /usr/sap/<SID>/SYS/exe/run — the directory that holds a local copy of all SAP instance executables, libraries and tools (optional for kernel 7.x and higher)
• /usr/sap/tmp — the directory in which the SAP operating system collector keeps monitoring data of the local operating system
• /usr/sap/hostctrl — the directory in which SAP control services for the local host are kept (kernel 7.x and higher)
• /etc/cmcluster — the directory in which Serviceguard keeps its legacy configuration files and the node specific package runtime directories
• Depending on database vendor and version, it might in addition be required to keep database client software locally. Details can be found in the database sections below.

Part of the content of this local group of directories must be synchronized manually between all nodes of the cluster. Serviceguard provides the tool cmcp(1m), which allows easy replication of a file to all cluster nodes.
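For illustration, the following sketch distributes a changed local file to the remaining cluster nodes. The node names and the file path are placeholder values, and it assumes that remsh/rcp equivalence is configured between the cluster nodes; where available, the cmcp(1m) tool mentioned above can be used instead:

# distribute one locally maintained file from this node to the others
# (clnode2, clnode3 and the file name are examples only)
for NODE in clnode2 clnode3
do
    rcp /etc/cmcluster/sap.functions ${NODE}:/etc/cmcluster/sap.functions
done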
SAP instance (startup) profile names contain either local hostnames or virtual hostnames. SGeSAP always prefers profiles that use local hostnames, to allow individual startup profiles for each host; this might be useful if the failover hardware differs in size.

In clustered SAP environments prior to 7.x releases it is required to install local executables. Local executables help to prevent several causes of package startup or package shutdown hangs due to the unavailability of the centralized executable directory. Availability of the executables delivered with packaged SAP components is mandatory for proper package operation. Experience has shown that it is good practice to create local copies of all files in the central executable directory. This includes the shared libraries delivered by SAP.

NOTE: SGeSAP packages and service monitors require SAP tools. Patching the SAP kernel sometimes also patches SAP tools. Depending on what SAP changed, this might introduce additional dependencies on shared libraries that were not required before the patch. Depending on the SHLIB_PATH settings of the root user, it might no longer be possible for SGeSAP to execute the SAP tools after applying the patch, because the newly introduced libraries are not found. Creating local copies of the complete central executable directory prevents this issue.

To automatically synchronize local copies of the executables, SAP components deliver the sapcpe mechanism. With every startup of the instance, sapcpe matches new executables stored centrally with those stored locally.

Directories that Reside on Shared Disks

Volume groups on shared SAN storage are configured as part of the SGeSAP packages. They can be either:

• instance specific,
• system specific, or
• environment specific.

Instance specific volume groups are required by only one SAP instance or one database instance. They usually get included with exactly the package that is set up for this instance. System specific volume groups get accessed from all instances that belong to a particular SAP System. Environment specific volume groups get accessed from all instances that belong to all SAP Systems installed in the whole SAP environment.

System and environment specific volume groups are set up using HA NFS to provide access for all instances. They should not be part of a package that is dedicated to a single SAP instance if there are several of them; if that package were down, other instances would also be impacted. As a rule of thumb, it is a good default to put all these volume groups into the package that holds the database of the system. These file systems often provide tools for database handling that do not require the SAP instance at all.

In consolidated environments with more than one SAP application component it is recommended to separate the environment specific volume groups into a dedicated HA NFS package. This package will be referred to as the sapnfs package. It should remain running all the time, since it is of central importance for the whole setup. Since sapnfs is just serving networked file systems, there is rarely a need to stop this package at any time. If environment specific volume groups become part of a database package, there will be a potential dependency between packages of different SAP Systems: stopping one SAP System by halting all related Serviceguard packages would then lead to a lack of necessary NFS resources for otherwise unrelated SAP Systems. The sapnfs package avoids this unpleasant dependency.
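As an illustration of what a sapnfs package serves, the NFS exports on HP-UX could look like the following /etc/exports entries. The SID C11 and the host names are placeholder values; in a real setup the HA NFS toolkit scripts of the package activate the exports together with the package's virtual IP, so treat this only as a sketch of the intended result:

# environment specific file systems served by the sapnfs package
/export/sapmnt/C11      -anon=65534,access=clnode1:clnode2:extapp1
/export/usr/sap/trans   -anon=65534,access=clnode1:clnode2:extapp1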
It is an option to also move the system specific volume groups to the sapnfs package. This can be done to keep the HA NFS mechanisms completely separate. A valuable naming convention for most of these shared volume groups is vg<SID><component> or alternatively vg<component><SID>.

Table 2-2 and Table 2-3 provide an overview of SAP shared storage and map the file systems to the component and package type for which they occur. The VG Name and Device Minor Number columns are left blank so they can be used to document the actual setup.

Table 2-2 Instance Specific Volume Groups for exclusive activation with a package

Mount Point                    Access Point   Recommended Packages           VG Name   Device Minor Number
/usr/sap/<SID>/SCS<INR>        shared disk    jci (scs), jdbjci
/usr/sap/<SID>/ASCS<INR>       shared disk    ci (ascs), dbci
/usr/sap/<SID>/ERS<INR>        shared disk    ers
/usr/sap/<SID>/DVEBMGS<INR>    shared disk    ci, dbci, d (SAP kernel 7.x)
/usr/sap/<SID>/D<INR>          shared disk    d

Table 2-3 System and Environment Specific Volume Groups

Mount Point              Access Point             Potential Owning Packages       VG Name   Device Minor Number
/export/sapmnt/<SID>     shared disk and HA NFS   db, dbci, jdb, jdbjci, sapnfs
/export/usr/sap/trans    shared disk and HA NFS   db, dbci, sapnfs
/usr/sap/put             shared disk              none

The tables can be used to document the device minor numbers in use. The device minor numbers of logical volumes need to be identical for each distributed volume group across all cluster nodes.

/usr/sap/<SID> should not be added to a package, since using it as a dynamic mount point would prohibit access to the instance directories of locally installed additional SAP application servers. The /usr/sap/<SID> mount point is also used to store local SAP executables. This prevents problems with busy mount points during database package shutdown. Due to the size of the directory content, it should not be part of the local root file system.

/usr/sap/tmp might or might not be part of the root file system. This is the working directory of the operating system collector process saposcol. The size of this directory will rarely exceed a few Megabytes.

If you have more than one system, place /usr/sap/put on separate volume groups created on shared drives. The directory should not be added to any package. This ensures that it is independent from any SAP WAS system and that you can mount it on any host by hand if needed.

All file systems mounted below /export are part of the HA NFS cross-mounting via automounter. The automounter uses virtual IP addresses to access the HA NFS directories via the path without the /export prefix. This ensures that the directories are quickly available after a switchover. The cross-mounting allows the coexistence of NFS server and NFS client processes on nodes within the cluster.
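A minimal sketch of the corresponding automounter configuration, assuming SID C11 and a relocatable hostname relocnfs owned by the NFS serving package (both placeholder names):

# /etc/auto_master: reference a direct map
/-    /etc/auto.direct

# /etc/auto.direct: clients mount via the virtual host,
# using the path without the /export prefix
/sapmnt/C11      relocnfs:/export/sapmnt/C11
/usr/sap/trans   relocnfs:/export/usr/sap/trans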
Option 2: SGeSAP NFS Idle Standby Cluster

This option has a simple setup, but it is severely limited in flexibility. In most cases, option 1 should be preferred. A cluster can be configured using option 2 if it fulfills all of the following prerequisites:

• Only one SGeSAP package is configured in the cluster. The underlying database technology is a single-instance Oracle RDBMS. The package combines failover services for the database and all required NFS services and SAP central components (ABAP CI, SCS, ASCS).
• There are no Application Server Instances installed on cluster nodes. Replicated Enqueue is not in use.
• There is no additional SAP software installed on the cluster nodes.

An HA NFS service can be configured to export file systems to external Application Servers that mount them manually. A dedicated NFS package is not possible; dedicated NFS requires option 1.

Common Directories that are Kept Local

The following common directories and their files are kept local on each node of the cluster:

• /home/<sid>adm — the home directory of the SAP system administrator with node specific startup log files
• /usr/sap/<SID>/SYS/exe/run — the directory that holds a local copy of all SAP instance executables, libraries and tools (optional for kernel 7.x and higher)
• /usr/sap/tmp — the directory in which the SAP operating system collector keeps monitoring data of the local operating system
• /usr/sap/hostctrl — the directory in which SAP control services for the local host are kept (kernel 7.x and higher)
• /etc/cmcluster — the directory in which Serviceguard keeps its legacy configuration files and the node specific package runtime directories
• Database client software needs to be stored locally on each node. Details can be found in the database sections below.

Part of the content of this local group of directories must be synchronized manually between all nodes of the cluster. SAP instance (startup) profile names contain either local hostnames or virtual hostnames. SGeSAP always prefers profiles that use local hostnames, to allow individual startup profiles for each host; this might be useful if the failover hardware differs in size. In clustered SAP environments prior to 7.x releases it is required to install local executables. Local executables help to prevent several causes of package startup or package shutdown hangs due to the unavailability of the centralized executable directory. Availability of the executables delivered with packaged SAP components is mandatory for proper package operation. Experience has shown that it is good practice to create local copies of all files in the central executable directory, including the shared libraries delivered by SAP. To automatically synchronize local copies of the executables, SAP components deliver the sapcpe mechanism. With every startup of the instance, sapcpe matches new executables stored centrally with those stored locally.

Directories that Reside on Shared Disks

Volume groups on shared SAN storage get configured as part of the SGeSAP package. Instance specific volume groups are required by only one SAP instance or one database instance. They usually get included with exactly the package that is set up for this instance. In this configuration option the instance specific volume groups are included in the package. System specific volume groups get accessed from all instances that belong to a particular SAP System. Environment specific volume groups get accessed from all instances that belong to any SAP System installed in the whole SAP scenario. System and environment specific volume groups should be set up using HA NFS to provide access capabilities to SAP instances on nodes outside of the cluster. The cross-mounting concept of option 1 is not required. A valuable naming convention for most of these shared volume groups is vg<SID><component> or alternatively vg<component><SID>.

Table 2-4 provides an overview of SAP shared storage for this special setup and maps the file systems to the component and package type for which they occur.

Table 2-4 File systems for the SGeSAP package in NFS Idle Standby Clusters

Mount Point        Access Point             Remarks     VG Name   Device Minor Number
/sapmnt/<SID>      shared disk and HA NFS   required
/usr/sap/<SID>     shared disk
/usr/sap/trans     shared disk and HA NFS   optional

The table can be used to document the device minor numbers in use. The device minor numbers of logical volumes need to be identical for each distributed volume group across all cluster nodes.
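One way to keep the minor numbers identical is to create the group device file with an explicit minor number on every node before importing the volume group. A sketch, assuming volume group vgC11sap and disk c4t0d1 (both placeholder names):

# on the node that creates the volume group
mkdir /dev/vgC11sap
mknod /dev/vgC11sap/group c 64 0x080000    # pick a minor number unused on all nodes
pvcreate -f /dev/rdsk/c4t0d1
vgcreate /dev/vgC11sap /dev/dsk/c4t0d1
vgexport -p -s -m /tmp/vgC11sap.map /dev/vgC11sap

# on every other cluster node, after copying /tmp/vgC11sap.map there,
# using the same 0x080000 minor number
mkdir /dev/vgC11sap
mknod /dev/vgC11sap/group c 64 0x080000
vgimport -s -m /tmp/vgC11sap.map /dev/vgC11sap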
If you have more than one system, place /usr/sap/put on separate volume groups created on shared drives. The directory should not be added to any package. This ensures that it is independent from any SAP WAS system and that you can mount it on any host by hand if needed.

Option 3: SGeSAP CFS Cluster

SGeSAP supports the use of HP Serviceguard Cluster File System for concurrent shared access. CFS is available with selected HP Serviceguard Storage Management Suite bundles. CFS replaces NFS technology for all SAP related file systems. All related instances need to run on cluster nodes to have access to the shared files. SAP related file systems that reside on CFS are accessible from all nodes in the cluster. Concurrent reads and writes are handled by the CFS layer. Each required CFS disk group and each required CFS mount point requires a Serviceguard multi-node package. SGeSAP packages are Serviceguard single-node packages. Thus, a package cannot combine SGeSAP and CFS related functionality.

Common Directories that are Kept Local

Most common file systems reside on CFS, but there are some directories and files that are kept local on each node of the cluster:

• /etc/cmcluster — the directory in which Serviceguard keeps its configuration files and the node specific package runtime directories
• /home/<sid>adm — the home directory of the SAP system administrator with node specific startup log files
• /usr/sap/tmp — the directory in which the SAP operating system collector keeps monitoring data of the local operating system
• /usr/sap/<SID>/SYS/exe/run — optional directory for usage with sapcpe, i.e. a local copy of executables (optional for kernel 7.x and higher)
• /usr/sap/hostctrl — the directory in which SAP control services for the local host are kept (kernel 7.x and higher)
• Depending on database vendor and version, it might in addition be required to keep database client software locally. Details can be found in the database sections below.

The content of this local group of directories must be synchronized manually between all nodes of the cluster. An exception is the optional local directory for SAP executables /usr/sap/<SID>/SYS/exe/run. It gets automatically synchronized as part of every instance startup and shutdown operation. A symbolic link called /usr/sap/<SID>/SYS/exe/ctrun needs to be created to access the centrally shared location.

Directories that Reside on CFS

The following table shows a recommended example of how to design SAP file systems for CFS shared access.

Table 2-5 File System Layout for SGeSAP CFS Clusters

Mount Point       Access Point          Package Name Example            DG Name
/sapmnt/<SID>     shared disk and CFS   SG-CFS-DG-<n>, SG-CFS-MP-<n>
/usr/sap/trans    shared disk and CFS   SG-CFS-DG-<n>, SG-CFS-MP-<n>
/usr/sap/<SID>    shared disk and CFS   SG-CFS-DG-<n>, SG-CFS-MP-<n>

The table can be used to document the volume and disk groups used across all cluster nodes. /usr/sap/tmp might or might not be part of the local root file system. This is the working directory of the operating system collector process saposcol. The size of this directory will rarely exceed a few Megabytes.
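On CFS-enabled clusters, the multi-node packages from Table 2-5 are normally created with the cfs* administration commands rather than written by hand. A sketch for the /sapmnt/<SID> file system, assuming disk group dgC11sap and volume vol_sapmnt (placeholder names; the exact command syntax can vary with the Storage Management Suite release):

# register the shared disk group (creates an SG-CFS-DG-* multi-node package)
cfsdgadm add dgC11sap all=sw

# register and mount the cluster file system (creates an SG-CFS-MP-* package)
cfsmntadm add dgC11sap vol_sapmnt /sapmnt/C11 all=rw
cfsmount /sapmnt/C11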
Database Instance Storage Considerations

SGeSAP internally supports the clustering of database technologies of different vendors. The vendors have implemented individual database architectures. The storage layout for SGeSAP cluster environments therefore needs to be discussed individually for each:

• Oracle Single Instance RDBMS
• Oracle Real Application Clusters
• MAXDB Storage Considerations

Table 2-6 Availability of SGeSAP Storage Layout Options for Different Database RDBMS

DB Technology                      Supported Platforms   SGeSAP Storage Layout Option   Cluster Software Bundles
Oracle Single-Instance             PA 9000, Itanium      NFS            1. Serviceguard or any Serviceguard Storage Management bundle (for Oracle)  2. SGeSAP  3. Serviceguard HA NFS Toolkit
                                                         idle standby   1. Serviceguard  2. SGeSAP  3. Serviceguard HA NFS Toolkit (opt.)
                                                         CFS            1. Serviceguard Cluster File System (for Oracle)  2. SGeSAP
Oracle Real Application Clusters                         CFS            1. Serviceguard Cluster File System for RAC  2. SGeSAP
SAPDB/MAXDB                                              NFS            1. Serviceguard or any Serviceguard Storage Management bundle  2. SGeSAP  3. Serviceguard HA NFS Toolkit

Oracle Single Instance RDBMS

Single Instance Oracle databases can be used with all three SGeSAP storage layout options. The setups for NFS and NFS Idle Standby Clusters are identical.

Oracle databases in SGeSAP NFS and NFS Idle Standby Clusters

Oracle server directories reside below /oracle/<SID>. These directories get shared via the database package. In addition, any SAP Application Server needs access to the Oracle client libraries, including the Oracle National Language Support (NLS) files shown in Table 2-7. The default location to which the client NLS files get installed differs with the SAP kernel release used:

Table 2-7 NLS Files - Default Location

Kernel Version   Client NLS Location
<=4.6            $ORACLE_HOME/ocommon/NLS[_<rel>]/admin/data
4.6              /oracle/<SID>/ocommon/nls/admin/data
6.x, 7.x         /oracle/client/<rel>/ocommon/nls/admin/data

It is important to note that there is always a second type of NLS directory, called the "server" NLS directory. It gets created during database or SAP Central System installations. The location of the server NLS files is identical for all SAP kernel versions: $ORACLE_HOME/common/nls/admin/data

The setting of the ORA_NLS[<version>] variable in the environments of <sid>adm and ora<sid> determines whether the client or the server path to NLS is used. The variable gets defined in the .dbenv_<hostname>.[c]sh files in the home directories of these users. During SGeSAP installation it is necessary to create local copies of the client NLS files on each host to which a failover could take place.

SAP Central Instances use the server path to NLS files, while Application Server Instances use the client path. Sometimes a single host may have an installation of both a Central Instance and an additional Application Server of the same SAP System. These instances need to share the same environment settings. SAP recommends using the server path to NLS files for both instances in this case. This will not work with SGeSAP, since switching the database would leave the application server without NLS file access.

Oracle 9.x releases no longer maintain NLS compatibility with Oracle 8.x. Also, Oracle 9.x patches introduce incompatibilities with older Oracle 9.x NLS files. The following constraints need to be met:

1. The Oracle RDBMS and database tools rely on an ORA_NLS[<version>] setting that refers to NLS files that are compatible with the version of the RDBMS. Oracle 9.x needs NLS files as delivered with Oracle 9.x.
2. The SAP executables rely on an ORA_NLS[<version>] setting that refers to NLS files of the same versions as those that were used during kernel link time by SAP development. This is not necessarily identical to the installed database release.
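For example, the environment files of the <sid>adm and ora<sid> users might contain lines like the following. This is only a sketch for a 6.x/7.x kernel with an Oracle 9.x client; the exact variable name (ORA_NLS33 in the Oracle 9.x case) and the client path depend on the Oracle release and are placeholder values here:

# excerpt from .dbenv_<hostname>.sh of <sid>adm -- client NLS path
ORA_NLS33=/oracle/client/92x_64/ocommon/nls/admin/data
export ORA_NLS33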
The Oracle database server and the SAP server might therefore need different types of NLS files. The server NLS files are part of the database Serviceguard package. The client NLS files are installed locally on all hosts. Special care has to be taken not to mix the access paths for Oracle server and client processes. The discussion of NLS files has no impact on the treatment of other parts of the Oracle client files.

The following directories need to exist locally on all hosts on which an Application Server might run. They cannot be relocated to different paths. The content needs to be identical to the content of the corresponding directories that are shared as part of the database SGeSAP package. The setup for these directories follows the "on top" mount approach, i.e., the directories might become hidden beneath identical copies that are part of the package:

$ORACLE_HOME/rdbms/mesg
$ORACLE_HOME/oracore/zoneinfo
$ORACLE_HOME/network/admin

Table 2-8 File System Layout for NFS-based Oracle Clusters

Mount Point                                     Access Point   Potential Owning Packages        VG Type                VG Name   Device Minor Number
$ORACLE_HOME                                    shared disk    db, dbci, jdb, jdbjci, dbcijci   db instance specific
/oracle/<SID>/saparch                           shared disk    db, dbci, jdb, jdbjci, dbcijci   db instance specific
/oracle/<SID>/sapreorg                          shared disk    db, dbci, jdb, jdbjci, dbcijci   db instance specific
/oracle/<SID>/sapdata1 ... /oracle/<SID>/sapdatan   shared disk    db, dbci, jdb, jdbjci, dbcijci   db instance specific
/oracle/<SID>/origlogA, /oracle/<SID>/origlogB  shared disk    db, dbci, jdb, jdbjci, dbcijci   db instance specific
/oracle/<SID>/mirrlogA, /oracle/<SID>/mirrlogB  shared disk    db, dbci, jdb, jdbjci, dbcijci   db instance specific
/oracle/client                                  local          none                             environment specific
/oracle/<SID> (some local Oracle client files reside here as part of the root file system)   local   none   db instance specific

Oracle Real Application Clusters

Oracle Real Application Clusters (RAC) is an option for the Single Instance Oracle Database Enterprise Edition. Oracle RAC is a cluster database with a shared cache architecture. The SAP certified solution is based on HP Serviceguard Cluster File System for RAC. Handling of a RAC database is not included in SGeSAP itself. RAC databases are treated by SGeRAC and Oracle tools, which integrate with SGeSAP. The configuration of SGeSAP packages for non-database components is identical to non-RAC environments.

NOTE: The configurations are designed to be compliant with SAP OSS note 830982. The note describes SAP recommendations for Oracle RAC configurations. A support statement from SAP regarding RAC clusters on HP-UX can be found as part of SAP OSS note 527843. A current support statement in note 527843 is required before any of the described RAC options can be implemented. The note maintenance is done by SAP and the note content may change at any time without further notice. Described options may have "Controlled Availability" status at SAP.

Real Application Clusters requires concurrent shared access to Oracle files from all cluster nodes. This can be achieved by installing the Oracle software on Cluster File Systems provided by HP Serviceguard Cluster File System for RAC. There are node specific files and directories, such as the TNS configuration. These files and directories are copied to a private file system on each node. The node specific files and directories are then removed from the shared disks, and symbolic links of the same name are created whose targets are the corresponding files in the private file system.
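A sketch of this relocation for tnsnames.ora; the local target directory /oracle/local is a placeholder:

# on each cluster node: keep a private copy of the node specific file
mkdir -p /oracle/local
cp -p $ORACLE_HOME/network/admin/tnsnames.ora /oracle/local/tnsnames.ora

# once, on the shared location: replace the file with a symbolic link
rm $ORACLE_HOME/network/admin/tnsnames.ora
ln -s /oracle/local/tnsnames.ora $ORACLE_HOME/network/admin/tnsnames.ora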
Table 2-9 File System Layout for Oracle RAC in SGeSAP CFS Clusters

Mount Point                                         Access Point                                            Potential Owning Packages
$ORACLE_HOME                                        shared disk and CFS
/oracle/client                                      shared disk and CFS
/oracle/<SID>/oraarch                               shared disk and CFS
/oracle/<SID>/sapraw                                shared disk and CFS
/oracle/<SID>/saparch                               shared disk and CFS
/oracle/<SID>/sapbackup                             shared disk and CFS
/oracle/<SID>/sapcheck                              shared disk and CFS
/oracle/<SID>/sapreorg                              shared disk and CFS
/oracle/<SID>/saptrace                              shared disk and CFS
/oracle/<SID>/sapdata1 ... /oracle/<SID>/sapdatan   shared disk and CFS
/oracle/<SID>/origlogA, /oracle/<SID>/origlogB      shared disk and CFS
/oracle/<SID>/mirrlogA, /oracle/<SID>/mirrlogB      shared disk and CFS
tnsnames.ora                                        local disk; access via symbolic link from shared disk

MAXDB Storage Considerations

SGeSAP supports failover of MAXDB databases as part of the SGeSAP NFS cluster option. Cluster File Systems cannot be used for the MAXDB part of SGeSAP clusters. The considerations given below for MAXDB also apply to liveCache and SAPDB clusters unless otherwise noted.

MAXDB distinguishes an instance dependent path /sapdb/<SID> and two instance independent paths, called IndepData and IndepPrograms. By default all three point to a directory below /sapdb. The paths can be configured in a configuration file called /var/spool/sql/ini/SAP_DBTech.ini. Depending on the version of the MAXDB database this file contains different sections and settings. A sample SAP_DBTech.ini for a host with a SAPDB 7.4 (LC1) and an APO 3.1 using a SAPDB 7.3 database instance (AP1):

[Globals]
IndepData=/sapdb/data
IndepPrograms=/sapdb/programs

[Installations]
/sapdb/LC1/db=7.4.2.3,/sapdb/LC1/db
/sapdb/AP1/db=7.3.0.15,/sapdb/AP1/db

[Databases]
.SAPDBLC=/sapdb/LC1/db
LC1=/sapdb/LC1/db
_SAPDBAP=/sapdb/AP1/db
AP1=/sapdb/AP1/db

[Runtime]
/sapdb/programs/runtime/7240=7.2.4.0,
/sapdb/programs/runtime/7250=7.2.5.0,
/sapdb/programs/runtime/7300=7.3.0.0,
/sapdb/programs/runtime/7301=7.3.1.0,
/sapdb/programs/runtime/7401=7.4.1.0,
/sapdb/programs/runtime/7402=7.4.2.0,

For MAXDB and liveCache version 7.5 (or higher) the SAP_DBTech.ini file does not contain the sections [Installations], [Databases] and [Runtime]. These sections are stored in the separate files Installations.ini, Databases.ini and Runtimes.ini in the IndepData path /sapdb/data/config. A sample SAP_DBTech.ini, Installations.ini, Databases.ini and Runtimes.ini for a host with a liveCache 7.5 (LC2) and an APO 4.1 using a MAXDB 7.5 (AP2):

From /var/spool/sql/ini/SAP_DBTech.ini:

[Globals]
IndepData=/sapdb/data
IndepPrograms=/sapdb/programs

From /sapdb/data/config/Installations.ini:

[Installations]
/sapdb/LC2/db=7.5.0.15,/sapdb/LC2/db
/sapdb/AP2/db=7.5.0.21,/sapdb/AP2/db

From /sapdb/data/config/Databases.ini:

[Databases]
.M750015=/sapdb/LC2/db
LC2=/sapdb/LC2/db
.M750021=/sapdb/AP2/db
AP2=/sapdb/AP2/db

From /sapdb/data/config/Runtimes.ini:

[Runtime]
/sapdb/programs/runtime/7500=7.5.0.0

NOTE: The [Globals] section is commonly shared between LC1/LC2 and AP1/AP2. This prevents setups that keep the directories of LC1 and AP1 completely separated.

The following directories are of special interest:

• /sapdb/programs: this can be seen as a central directory with all MAXDB executables. The directory is shared between all MAXDB instances that reside on the same host. It is also possible to share the directory across hosts, but it is not possible to use different executable directories for two MAXDB instances on the same host. Furthermore, different SAPDB versions might get installed on the same host. The files in /sapdb/programs have to be of the newest version that any MAXDB on the cluster nodes has. Files in /sapdb/programs are downwards compatible.
For liveCache 7.4 and APO 3.1 using SAPDB 7.3 this means that the SAPDB 7.4 version executables have to be installed in /sapdb/programs. It is important to realize that any SAPDB based SAP application server instance will also use this path to access the database client files.

• /sapdb/data/config: this directory is also shared between instances, though it contains many files that are instance specific, e.g. /sapdb/data/config/<SID>.* According to SAP this path setting is static.

• /sapdb/data/wrk: the working directory of the main MAXDB processes is also a subdirectory of the IndepData path for non-HA setups. If a SAPDB restarts after a crash, it copies important files from this directory to a backup location. This information is then used to determine the reason for the crash. In HA scenarios, for SAPDB/MAXDB versions lower than 7.6, this directory should move with the package. Therefore, SAP provides a way to redefine this path for each SAPDB/MAXDB individually. SGeSAP expects the work directory to be part of the database package. The mount point moves from /sapdb/data/wrk to /sapdb/data/<SID>/wrk for the clustered setup. This directory should not be mixed up with the directory /sapdb/data/<SID>/db/wrk that might also exist. Core files of the kernel processes are written into the working directory. These core files can have sizes of several Gigabytes. Sufficient free space needs to be configured for the shared logical volume to allow core dumps.

NOTE: For MAXDB RDBMS starting with version 7.6 these limitations no longer exist. The working directory is utilized by all instances (IndepData/wrk) and can be globally shared.

▲ /var/spool/sql: this directory hosts local runtime data of all locally running MAXDB instances. Most of the data in this directory would become meaningless in the context of a different host after failover. The only critical portion that still has to be accessible after failover is the initialization data in /var/spool/sql/ini. This directory is usually very small (< 1 Megabyte). With MAXDB and liveCache 7.5 or higher, the only local files are contained in /var/spool/sql/ini; the other paths are just links to local runtime data in the IndepData path:

dbspeed -> /sapdb/data/dbspeed
diag -> /sapdb/data/diag
fifo -> /sapdb/data/fifo
ipc -> /sapdb/data/ipc
pid -> /sapdb/data/pid
pipe -> /sapdb/data/pipe
ppid -> /sapdb/data/ppid

The links need to exist on every possible failover node on which the MAXDB or liveCache instance is able to run (see the sketch after this section).

▲ /etc/opt/sdb: only exists when using MAXDB or liveCache 7.5; needs to be local on each node, together with entries in /etc/passwd and /etc/group.

Table 2-10 shows the file system layout for SAPDB clusters.

NOTE: In HA scenarios, valid for SAPDB/MAXDB versions up to 7.6, the runtime directory /sapdb/data/wrk is configured to be located at /sapdb/<SID>/wrk to support consolidated failover environments with several MAXDB instances. The local directory /sapdb/data/wrk, though, is still referred to by the VSERVER processes (vserver, niserver), which means VSERVER core dump and log files will be located there.
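To prepare an additional failover node for a MAXDB or liveCache 7.5 instance, the link set shown above can be recreated with a small loop (a sketch assuming the default IndepData path /sapdb/data):

# run as root on each possible failover node
mkdir -p /var/spool/sql/ini
cd /var/spool/sql
for LNK in dbspeed diag fifo ipc pid pipe ppid
do
    ln -s /sapdb/data/${LNK} ${LNK}
done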
Table 2-10 File System Layout for SAPDB Clusters

Mount Point                 Access Point             Potential Owning Packages       VG Type                VG Name   Device Minor Number
/sapdb/<SID>                shared disk              db, dbci, jdb, jdbjci           database specific
/sapdb/<SID>/wrk*           shared disk              db, dbci, jdb, jdbjci           database specific
/sapdb/<SID>/data           shared disk              db, dbci, jdb, jdbjci           database specific
/sapdb/<SID>/saplog         shared disk              db, dbci, jdb, jdbjci           database specific
/export/sapdb/programs      shared disk and HA NFS   db, dbci, jdb, jdbjci, sapnfs   environment specific
/export/sapdb/data          shared disk and HA NFS   db, dbci, jdb, jdbjci, sapnfs   environment specific
/export/var/spool/sql/ini   shared disk and HA NFS   db, dbci, jdb, jdbjci, sapnfs   environment specific
/etc/opt/sdb                local                    none

* only valid for SAPDB/MAXDB versions lower than 7.6

NOTE: Using tar or cpio is not a safe method to copy or move directories to shared volumes. In certain circumstances, file or ownership permissions may not be correctly transported, especially for files with the s-bit set: /sapdb/<SID>/db/pgm/lserver and /sapdb/<SID>/db/pgm/dbmsrv. These files are important for the vserver process ownership and they have an impact on starting the SAPDB via <sid>adm. These files should retain the same ownership and permission settings after being moved to a shared volume.

Database and SAP instances depend on the availability of /sapdb/programs. To minimize dependencies between otherwise unrelated systems, it is strongly recommended to use a dedicated sapnfs package, especially if the cluster has additional SAP application servers installed, more than one SAPDB is installed, or the database is configured in a separate DB package. Keeping local copies is possible, though not recommended, because there are no administration tools that keep track of the consistency of the local copies of these files across all systems.

3 Step-by-Step Cluster Conversion

This chapter describes in detail how to implement an SAP cluster using Serviceguard and Serviceguard Extension for SAP (SGeSAP). It is written in the format of a step-by-step guide and gives detailed examples for each task. Actual implementations might require a slightly different approach. Many steps synchronize cluster host configurations or virtualize SAP instances manually. If these tasks are already covered by different means, it might be sufficient to quickly check that the requested result is already achieved. For example, if the SAP application component was already installed using a virtual IP address, many steps from the SAP Application Server configuration section can be omitted.

Various Serviceguard modules are available with SGeSAP, including:

• sgesap/sapinstance for clustering of one or more SAP instances, for example SAP Central Instances, System Central Services, Replication Instances, ABAP Application Servers and JAVA Application Servers of a single SAP system
• sgesap/dbinstance for ORACLE or MAXDB RDBMS
• sgesap/sapextinstance for the handling of non-clustered SAP software
• sgesap/sapinfra for clustering of SAP infrastructure software

The generic SGeSAP package module can be referred to as sgesap/sapinstance. It can be used to add one or more Netweaver instances of a single SAP system to a Serviceguard package. sgesap/scs can be used to add only the Netweaver software single points of failure, i.e. Central Instances or System Central Service Instances. sgesap/ers can be used to add only the Enqueue Replication Service instances that correspond to System Central Service instances.
Various SGeSAP legacy package types are supported, including:

• SAP Central Instances and ABAP System Central Services (ci)
• Java System Central Services (jci)
• Database Instances for ABAP components (db)
• Database Instances for J2EE components (jdb)
• Replicated Enqueue for both ABAP and JAVA System Central Services ([a]rep)
• ABAP Dialog Instances and Application Servers (d)
• JAVA Dialog Instances and Application Servers (jd)
• Highly Available NFS (sapnfs)
• Combinations of these package types (dbci, jdbjci, dbcijci, ...)

Refer to the support matrix published in the release notes of SGeSAP for details on whether your SAP application component is supported in Serviceguard clusters on HP-UX. liveCache legacy packages require the legacy package type lc. liveCache modules are designed with the module sgesap/livecache. MDM can be clustered with a single legacy package mgroup, or with a set of five packages mmaster, mds, mdis, mdss, mdb. Refer to Chapter 4 for liveCache package types, and to Chapter 5 for MDM legacy package types.

In order to ensure a consistent naming scheme in the following installation and configuration steps, the SGeSAP package names are assembled from the package type and the SAP System ID:

<package_type><SID>

or, in case of ambiguous results:

<package_type><INSTNR><SID>

Examples: dbC11, ciC11, dbciC11, dbcijciC11, jdbC11, jciC11, jdbjciC11, d01C11, d02C11, ers03C11

There is one exception to the naming convention, which concerns the dedicated NFS package sapnfs that might serve more than one SAP SID. SGeSAP modules are implemented to work independently of the package naming used. For these packages, the above naming scheme is a recommendation. For a description of these packages and their combination restrictions, refer to Chapter 1, Designing SGeSAP Cluster Scenarios.

The legacy package installation steps cover HP-UX 11i v1, HP-UX 11i v2 and HP-UX 11i v3 using Serviceguard 11.16 or higher. Modular packages can be used with HP-UX 11i v2 and HP-UX 11i v3 using Serviceguard 11.18 or higher. The package creation process is split into the following logical tasks:

• SAP Preparation
• HP-UX Configuration
• Modular Package Configuration
• Legacy Package Configuration
• HA NFS Toolkit Configuration
• Auto FS Configuration
• Database Configuration
• SAP Application Server Configuration

The tasks are presented as a sequence of steps. Each installation step is accompanied by a unique number of the format XXnnn, where nnn are incrementing values and XX indicates the step relationship, as follows:

• ISnnn — Installation Steps mandatory for all SAP packages
• OSnnn — Optional Steps
• LSnnn — Installation Steps mandatory for legacy packages only
• MSnnn — Installation Steps mandatory for module based packages only
• ORnnn — ORacle database only steps
• SDnnn — SAPDB/MAXDB or liveCache database only steps
• REnnn — Replicated Enqueue only steps

If you need assistance during the installation process and need to contact HP support, you can refer to the installation step numbers to specify your issue. Also include the version of this document. The installation step numbers were reorganized for this document version and do not correspond to earlier releases. Whenever appropriate, HP-UX sample commands are given to guide you through the integration process. It is assumed that the hardware as well as the operating system and Serviceguard are already installed properly on all cluster hosts. Sometimes a condition is specified with an installation step. Follow the information presented only if the condition is true for your situation.
NOTE: For installation steps in this chapter that require the adjustment of SAP specific parameters in order to run the SAP system in a switchover environment, example values are given. These values are for reference ONLY, and it is recommended to read and follow the appropriate SAP OSS notes for SAP's latest recommendations. Whenever possible, the SAP OSS note number is given.

The SAP Application Server installation types are ABAP-only, Java-only and Add-in; the latter includes both the ABAP and the Java stack. In principle, all SAP cluster installations look very similar. Older SAP systems get installed in the same way as they would without a cluster. The cluster conversion takes place afterwards and includes a set of manual steps. Some of these steps can be omitted since the introduction of high availability installation options to the SAP installer SAPINST. In this case, a part of the cluster configuration is done prior to the SAP installation as such. The SAP instances can then be installed into a virtualized environment, which obsoletes the SAP Application Server Configuration steps that usually conclude a manual cluster conversion. Therefore, it is important to first decide which kind of SAP installation is intended. The installation of a SAP High Availability System was introduced with Netweaver 2004s. For Netweaver 2004 JAVA-only installations there is a similar High Availability option for SAPINST. All older SAP kernels need to be clustered manually. The SAP preparation section covers all three cases. It also describes how Enqueue Replication can be activated for use with SGeSAP. The exact steps for that also depend on the SAP release that is used. Other differences in cluster layout derive from the usage of HP Serviceguard Cluster File System. Alternative setups use HA NFS packages and automounter technology to ensure cluster-wide access to file systems. Finally, the underlying database also causes slightly different installation steps.

SAP Preparation

This section covers the SAP specific preparation, installation and configuration before creating a highly available SAP System landscape. This includes the following logical tasks:

• SAP Pre-Installation Considerations
• Replicated Enqueue Conversion

SAP Pre-Installation Considerations

This section gives additional information that helps with the task of performing SAP installations in HP Serviceguard clusters. It is not intended to replace any SAP installation manual. SAP installation instructions provide complementary information and should be consulted in addition to this. SAP Netweaver 2004s introduced High Availability installation options. In combination with SGeSAP they can also be activated to guide the clustering of SAP JAVA-only applications based on kernel 6.40. The SAP Enterprise Portal 6.0 belongs to this group of applications. All SAP components that are based on earlier SAP technology should be installed in the standard way as described by the SAP Installation Guides. They can be clustered after the initial installation. In any case, it makes sense to already set up the required file systems as documented in Chapter 2, Planning the Storage Layout, to prevent avoidable conversion activities at a later stage. The following paragraphs are divided into these installation options:

• SAP Netweaver High Availability
• Generic SAP Installation

SAP Netweaver High Availability

The SAP Netweaver High Availability options were officially introduced with Netweaver 2004s technology.
They are based on the SAP Application Server 7.x. The SAPINST installer for SAP kernel 7.x offers the following installation options out of the box:

• Central System
• Distributed System
• High Availability System

The Central System and Distributed System installations build a traditional SAP landscape. They install a database and a monolithic Central Instance. Exceptions are Java-only based installations.

NOTE: For Java-only based installations the only possible installation option is a High Availability System installation.

It is strongly recommended to use the "High Availability System" option for all new installations that are meant to be used with SGeSAP. A SAP Application Server 7.0 may consist of any combination of the following components:

• Java System Central Services Instance (SCS) [Java Message and Enqueue Server]
• ABAP System Central Services Instance (ASCS) [ABAP Message and Enqueue Server]
• Dialog Instance(s) (D, JD, DVEBMGS) [ABAP and/or Java stack]

The potential SPOFs are the SCS, ASCS and DB instances. Dialog Instances can be installed redundantly on nodes inside or outside the cluster. The ABAP DVEBMGS instance for Netweaver 2004s (or higher) is similar to a simple Dialog Instance, except that it is preconfigured to contain the services Batch, Update, Spool and Gateway. For JAVA, a Central Instance runs the Java Software Deployment Manager (SDM). These services can, though, also be configured redundantly with other Dialog Instances.

The SAP Netweaver CDs/DVDs must be available either as physical copies or as images on the local or shared file system for the duration of the installation. As preparation, simple Serviceguard packages for the clustered instances have to be created. They provide the virtual IP addresses that are required during installation. The package(s) will later be altered to utilize SGeSAP functionality. It is more convenient to do this once the SAP installation has taken place. The following steps are performed as root user to prepare the cluster for the SAP installation. Preparation steps MS12xx should be followed to create module-based packages. Preparation steps LS12xx should be followed to create legacy packages.

Preparation Step: MS1200

Create a tentative Serviceguard package configuration for one or more SAP instances. The first thing required is a SGeSAP configuration file that can be used to define the SAP Serviceguard failover package. It is usually created with the following command:

cmmakepkg -m sgesap/sapinstance [-m ...] > <path>/sap.config

For a database package, the module sgesap/dbinstance can be specified.

NOTE: SGeSAP modules can be referred to by using SGeSAP legacy package types. Covered package types include: ci, scs and ascs, rep and arep.

Specification examples for SGeSAP modules:

cmmakepkg -m sgesap/db -m sgesap/ci
creates a single package configuration file for database and SAP instance(s) (one package concept)

cmmakepkg -m sgesap/db
cmmakepkg -m sgesap/ci
separates database and SAP instances into two package configuration files (two package concept)

cmmakepkg -m sgesap/scs
cmmakepkg -m sgesap/ers
separates System Central Services and Enqueue Replication into two packages

cmmakepkg -m sgesap/scs -m sgesap/ers
would immediately issue an error message, because System Central Services and Enqueue Replication cannot share the same package.
Preparation Step: MS1210

The created configuration file needs to be edited. Refer to the Managing Serviceguard user's guide for general information about the generic file content. A minimum configuration will do for the purpose of supporting the SAP installation. At least the following parameters should be edited: package_name, node_name, ip_address and monitored_subnet. The package_name can be chosen freely. It is often a good approach to stick to the naming convention that combines the name of the SAP instance type and the SAP System ID. Examples: dbciC11, scsC11. Specify node_name entries for all hosts on which the package should be able to run. There is a section that defines a virtual IP address array. All virtual IP addresses specified here will become associated with the SAP and database instances that are going to be installed. Specify at least one virtual IP. Specify the subnets to be monitored in the monitored_subnet section. The only SAP specific parameter that needs to be set is the planned SAP System ID, sap_system.
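The resulting edits could look like the following excerpt of the configuration file (all values, including the package name, node names, subnet and address, are placeholders for illustration only):

package_name        dbciC11
node_name           clnode1
node_name           clnode2
monitored_subnet    10.17.1.0
ip_subnet           10.17.1.0
ip_address          10.17.1.180
sap_system          C11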
Preparation Step: MS1220

Create a debug file on the system on which the installation takes place. The debug file allows manual SAP instance shutdown and startup operations during installation:

touch /var/adm/cmcluster/debug_<package_name>

It does not matter if the system is meant to be run in a multi-tier fashion that separates the database from the ASCS instance by running them on different cluster nodes during normal operation. For convenience, all installation steps should be done on a single machine. Due to the virtualization, it is easy to separate the instances later on.

Preparation Step: MS1230

The configuration file needs to be applied and the packages started. This step assumes that the cluster as such is already configured and started. Refer to the Managing Serviceguard user's guide if more details are required.

cmapplyconf -P ./sap.config
cmrunpkg -n <hostname> <package_name>

All virtual IP addresses should now be configured. A ping command should reveal that they respond to communication requests.

Preparation Step: IS1300

Before installing the SAP Application Server 7.0, some OS-specific parameters have to be adjusted. Verify or modify the HP-UX kernel parameters as recommended by the SAP Master Guide Part 1. Be sure to propagate changes to all nodes in the cluster. The SAP installer checks the OS parameter settings with a tool called "Prerequisite Checker" and stops the installation when the requirements are not met. Check the installed Java SDK against the requirements of the SAP Master Guide Part 1. Be sure to install the required SDK on all nodes in the cluster. In case the J2EE engine uses security features that require the Java Cryptographic Toolkit, it has to be downloaded both from the Sun website and from the SAP Service Marketplace.

Preparation Step: IS1320

Before invoking the SAP installer, some additional requirements have to be met that are described in the SAP installation documentation. These are not specific to cluster implementations and apply to any SAP installation. They usually involve the setting of environment variables for SAPINST, like the DISPLAY variable, the temporary installation directory TMP, etc.

Installation Step: IS1330

The installation is done using the virtual IP provided by the Serviceguard package. SAPINST can be invoked with a special parameter called SAPINST_USE_HOSTNAME. This prevents the installer routines from comparing the physical hostname with the virtual address and drawing wrong conclusions. The installation of the entire SAP Application Server 7.0 happens in several steps, depending on the installation type. Each time, a different virtual hostname can be provided.

• First, the SAP System Central Services component (SCS and/or ASCS) is installed using the virtual IP contained in the corresponding Serviceguard package.
• Then, the database instance is set up and installed on the relocatable IP address of the DB package.
• After that, the Central Instance and all required Dialog Instances are established. Virtual IPs can be used here, too. It is recommended to do this, because it preserves the flexibility to move instances between physical machines. HP does not support a conversion to a virtual IP after the initial installation on a physical hostname for SAP JAVA engines.

The SAPINST_USE_HOSTNAME option can be set as an environment variable, using export or setenv commands. It can also be passed to SAPINST as an argument:

cd <DVD_mountpoint>/IM<nn>_OS/SAPINST/UNIX/<OS>
./sapinst SAPINST_USE_HOSTNAME=<virtual hostname>

NOTE: The SAPINST_USE_HOSTNAME parameter is not mentioned in the SAP installation documents for the SAP JAVA engine 6.40, but it is officially supported. The installer will show a warning message that has to be confirmed.

When starting the SAPINST installer for kernel 6.40, the first screen shows installation options that are generated from an XML file called product.catalog, located at <DVD_mountpoint>/IM<nn>_OS. The standard catalog file product.catalog has to be either:

▲ replaced by product_ha.catalog in the same directory on a local copy of the DVD, or
▲ passed as product_ha.catalog as an argument to the SAPINST installer.

It is recommended to pass the catalog as an argument to SAPINST. The XML file that is meant to be used with SGeSAP clusters is included on the installation DVDs/CDs distributed by SAP. For example:

cd <DVD_mountpoint>/IM<nn>_OS/SAPINST/UNIX/<OS>
./sapinst <DVD_mountpoint>/IM<nn>_OS/SAPINST/UNIX/<OS>/product_ha.catalog \
  SAPINST_USE_HOSTNAME=<reloc_jci>

The SAP installer should now be starting. Afterwards, the virtualized installation is completed, but the cluster still needs to be configured. The instances are now able to run on the installation host, provided the corresponding Serviceguard packages were started up front. It is not yet possible to move instances to other nodes, monitor the instances or trigger automated failovers. Do not shut down the Serviceguard packages while the instances are running. It is possible to continue installing content for the SAP J2EE Engine before the cluster conversion described in the sections below gets performed.

Replicated Enqueue Conversion

This section describes how a SAP ABAP Central Instance DVEBMGS can be converted to use the Enqueue Replication feature for seamless failover of the Enqueue Service. The whole section can be skipped if Enqueue Replication is not going to be used. It can also be skipped if Replicated Enqueue is already installed. The following manual conversion steps can be done for SAP applications that are based on ABAP kernel 4.6D and 6.40. These kernels are supported by SGeSAP with Replicated Enqueue, and SAP does not deliver installation routines that install Replicated Enqueue configurations for these releases, so the manual conversion steps become necessary. The 4.6D kernel does require some kernel executables of the 6.40 kernel to be added.
If the SAP installation was done for Netweaver 2004 Java-only, Netweaver 2004s, or a newer release as documented in the section 'SAP Pre-Installation Considerations', only the second part, 'Creation of Replication Instance', is required. The split of the Central Instance is then already effective and a [A]SCS instance was created during installation. In this case it is sufficient to ensure that the [A]SCS startup profile does not use local Restart for the enqueue process and that the instance profile contains the recommended replication parameter settings, e.g.:

enque/server/internal_replication = true
enque/server/replication = true
enque/server/threadcount = 1
enque/enrep/keepalive_count = 0
enque/process_location = local
enque/table_size = 4096
ipc/shm_psize_34 = 0

Using Replicated Enqueue significantly changes the SAP instance landscape and increases the resource demand: two additional SAP instances will be generated during the splitting procedure. There is a requirement for at least one additional unique SAP instance number. Unique means that the number is not in use by any other SAP instance of the cluster. There is also a requirement for one or two additional shared LUNs on the SAN and one or two additional virtual IP addresses for each subnet. The LUNs need to have the size that is required for a SAP instance directory of the targeted kernel release.

Splitting an ABAP Central Instance

The SPOFs of the DVEBMGS instance will be isolated in a new instance called ABAP System Central Services Instance ASCS<INR>. This instance replaces DVEBMGS for the ci package type. The remaining parts of the Central Instance can then be configured as Dialog Instance D<INR>. The ASCS instance should then only be started and stopped with the cluster package startup and halt commands instead of manual shell operations.

NOTE: The Dialog Instance D<INR> that results from the conversion also represents one or more Single Points of Failure for many scenarios. In these cases, D<INR> should also be clustered with SGeSAP. It is not even unusual to combine ASCS<INR> and D<INR> in a single SGeSAP package. This makes sense, even though the resulting package contains the same components as a traditional package for DVEBMGS would. Seamless failover with Replicated Enqueue cannot be achieved without splitting DVEBMGS into two instances.

Log on as root to the server on which the Central Instance DVEBMGS was installed.

Replicated Enqueue Conversion: RE010

Create a new mountpoint:

su - <sid>adm
mkdir /usr/sap/<SID>/ASCS<INR>

Replicated Enqueue Conversion: RE020

A volume group needs to be created for the ASCS instance. The physical device(s) should be created as LUN(s) on shared storage. Storage connectivity is required from all nodes of the cluster that should be able to run the ASCS. For the volume group, one logical volume should get configured. For the required size, refer to the capacity consumption of /usr/sap/<SID>/DVEBMGS<INR>. This should provide a conservative upper limit that leaves reasonable headroom. Mount the new logical volume on the mountpoint created in RE010.

Replicated Enqueue Conversion: RE030

Create the instance subdirectories in the mounted logical volume. They will be switched between the cluster nodes later.

su - <sid>adm
cd /usr/sap/<SID>/ASCS<INR>
mkdir data log sec work
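The storage work of steps RE020 and RE030 might look like the following sketch. The volume group name vgC11ascs, the disk device, the size and the instance number 40 are placeholder values; see Chapter 2 for the device minor number considerations:

# create the shared volume group and one logical volume for the ASCS instance
pvcreate -f /dev/rdsk/c5t0d2
mkdir /dev/vgC11ascs
mknod /dev/vgC11ascs/group c 64 0x090000
vgcreate /dev/vgC11ascs /dev/dsk/c5t0d2
lvcreate -L 2048 -n lvascs /dev/vgC11ascs     # size in MB
newfs -F vxfs /dev/vgC11ascs/rlvascs
mount /dev/vgC11ascs/lvascs /usr/sap/C11/ASCS40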
Replicated Enqueue Conversion: RE040

If the used SAP kernel has a release older than 6.40: download the executables for the Standalone Enqueue Server from the SAP Service Marketplace and copy them to /sapmnt/<SID>/exe. There should be at least three files that are added or replaced: enserver, enrepserver and ensmon. Make sure these files are part of the sapcpe mechanism for local executable creation (see Chapter 2). This step creates a version mix for the executables: the Standalone Enqueue executables get taken from the 6.40 kernel. Special caution has to be taken to not replace them with older releases later on. This might happen by accident when kernel patches get applied.

Replicated Enqueue Conversion: RE050

Create an instance profile and a startup profile for the ASCS instance. These profiles get created as <sid>adm in the NFS-shared /sapmnt/<SID>/profile directory. Here is an example template for the instance profile <SID>_ASCS<INR>_<virtual hostname>:

#----------------------------------
# general settings
#----------------------------------
SAPSYSTEMNAME = <SID>
INSTANCE_NAME = ASCS<INR>
SAPSYSTEM = <INR>
SAPLOCALHOST = <virtual hostname>
SAPLOCALHOSTFULL = <virtual hostname>.<domain>
#----------------------------------
# enqueue server settings
#----------------------------------
enque/server/internal_replication = true
enque/server/replication = true
enque/server/threadcount = 1
enque/enrep/keepalive_count = 0
enque/process_location = local
enque/table_size = 4096
ipc/shm_psize_34 = 0
#----------------------------------
# message server settings
#----------------------------------
rdisp/mshost = <virtual hostname>
#----------------------------------
# prevent shmem pool creation
#----------------------------------
ipc/shm_psize_16 = 0
ipc/shm_psize_24 = 0
ipc/shm_psize_34 = 0
ipc/shm_psize_66 = 0

This template shows the minimum settings. Scan the old <SID>_DVEBMGS<INR>_<hostname> profile to see whether there are additional parameters that apply to either the Enqueue Service or the Message Service. Individual decisions need to be made whether they should be moved to the new profile. Here is an example template for the startup profile START_ASCS<INR>_<virtual hostname>:

#-----------------------------------------------------------------------
SAPSYSTEMNAME = <SID>
INSTANCE_NAME = ASCS<INR>
#-----------------------------------------------------------------------
# start SCSA handling
#-----------------------------------------------------------------------
Execute_00 = local $(DIR_EXECUTABLE)/sapmscsa -n pf=$(DIR_PROFILE)/<SID>_ASCS<INR>_<virtual hostname>
#-----------------------------------------------------------------------
# start message server
#-----------------------------------------------------------------------
_MS = ms.sap<SID>_ASCS<INR>
Execute_01 = local rm -f $(_MS)
Execute_02 = local ln -s -f $(DIR_EXECUTABLE)/msg_server $(_MS)
Start_Program_01 = local $(_MS) pf=$(DIR_PROFILE)/<SID>_ASCS<INR>_<virtual hostname>
#-----------------------------------------------------------------------
# start syslog collector daemon
#-----------------------------------------------------------------------
_CO = co.sap<SID>_ASCS<INR>
Execute_03 = local rm -f $(_CO)
Execute_04 = local ln -s -f $(DIR_EXECUTABLE)/rslgcoll $(_CO)
Start_Program_02 = local $(_CO) -F pf=$(DIR_PROFILE)/<SID>_ASCS<INR>_<virtual hostname>
#-----------------------------------------------------------------------
# start enqueue server
#-----------------------------------------------------------------------
_EN = en.sap<SID>_ASCS<INR>
Execute_05 = local rm -f $(_EN)
Execute_06 = local ln -s -f $(DIR_EXECUTABLE)/enserver $(_EN)
Start_Program_03 = local $(_EN) pf=$(DIR_PROFILE)/<SID>_ASCS<INR>_<virtual hostname>
#-----------------------------------------------------------------------
# start syslog send daemon
#-----------------------------------------------------------------------
_SE = se.sap<SID>_ASCS<INR>
Execute_07 = local rm -f $(_SE)
Execute_08 = local ln -s -f $(DIR_EXECUTABLE)/rslgsend $(_SE)
Start_Program_04 = local $(_SE) -F pf=$(DIR_PROFILE)/<SID>_ASCS<INR>_<virtual hostname>
#-----------------------------------------------------------------------
Replicated Enqueue Conversion: RE060

Adapt the instance profile and startup profile for the DVEBMGS instance. The goal of this step is to strip away the Enqueue and Message Server entries and create a standard Dialog Instance. A second alteration is the replacement of the Instance Number of the Central Instance, which now belongs to ASCS and AREP. The new Dialog Instance profile <SID>_DVEBMGS<INR>_<virtual hostname> differs from the original <SID>_DVEBMGS<INR>_<hostname> profile in several ways: all configuration entries for the Message and Enqueue Service need to be deleted, e.g. rdisp/wp_no_enq=1 must be removed. Several logical names and address references need to reflect a different relocatable address and a different Instance Number. For example:

SAPSYSTEM = <INR>
rdisp/vbname = <virtual hostname>_<SID>_<INR>
SAPLOCALHOST = <virtual hostname>
SAPLOCALHOSTFULL = <virtual hostname>.<domain>

The exact changes depend on the individual appearance of the file for each installation. The startup profile is also individual, but usually it can be created similar to the default startup profile of any Dialog Instance. Here is an example template for the startup profile START_DVEBMGS<INR>_<virtual hostname>:

#-----------------------------------------------------------------------
SAPSYSTEMNAME = <SID>
INSTANCE_NAME = DVEBMGS<INR>
#-----------------------------------------------------------------------
# start SCSA
#-----------------------------------------------------------------------
Execute_00 = local $(DIR_EXECUTABLE)/sapmscsa -n pf=$(DIR_PROFILE)/<SID>_DVEBMGS<INR>_<virtual hostname>
#-----------------------------------------------------------------------
# start application server
#-----------------------------------------------------------------------
_DW = dw.sap<SID>_DVEBMGS