
Configuring High Availability for Oracle Business Intelligence and EPM

15.1.3.1.7 Deployment Artifacts

Deployment artifacts required for correct setup of Essbase include JDBC and ODBC configuration, reg.properties, security artifacts, and so forth. Deployment artifacts must be passed to OPMN, which can then provision Essbase.

15.1.3.1.8 Log Files

Diagnostic log files reside, by default, under ORACLE_INSTANCE/diagnostics.

Essbase Agent writes messages to Essbase.log:
■ Default location: ORACLE_INSTANCE/diagnostics/logs/Essbase/<name of the Essbase instance>/essbase
■ Configured location: HYPERION_LOGHOME/essbase

Essbase application server writes messages to application server log files:
■ Default location: ORACLE_INSTANCE/diagnostics/logs/Essbase/<name of the Essbase instance>
■ Configured location: HYPERION_LOGHOME/essbase/app

OPMN creates the Essbase console log, located by default in ORACLE_INSTANCE/diagnostics/logs/Essbase/<name of the Essbase instance>.

The Essbase Forward Ping log, EssbasePing.log, is located by default in ORACLE_INSTANCE/diagnostics/logs/OPMN/opmn.

Lease manager logs are created only in a failover configuration. See Section 15.1.4.2, "Protection from Failures and Expected Behaviors." The lease manager modules within Essbase Agent and Essbase Server write messages to their own logs:
■ Default location: ORACLE_INSTANCE/diagnostics/logs/Essbase/<ias component name>/essbase
■ Configured location: HYPERION_LOGHOME/essbase
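All of the default locations above are built from the ORACLE_INSTANCE directory and the Essbase instance name. As a minimal sketch, where the instance path and instance name below are illustrative placeholder values rather than real deployment settings, the agent log path can be assembled like this:

```python
import os

# Hedged sketch: assemble the default Essbase Agent log path described above.
# Both values below are illustrative placeholders, not real deployment paths.
oracle_instance = "/u01/app/oracle/instance1"
instance_name = "essbaseserver1"

agent_log = os.path.join(
    oracle_instance, "diagnostics", "logs", "Essbase",
    instance_name, "essbase", "Essbase.log",
)
print(agent_log)
```

If HYPERION_LOGHOME is configured, the same file is written under that directory instead of the default location.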

15.1.4 Oracle Essbase High Availability Concepts

This section provides conceptual information about using Essbase in a high availability environment.

15.1.4.1 Oracle Essbase High Availability Architecture

Figure 15–2 shows Essbase in a highly available Oracle BI EE deployment.

15.1.4.1.1 Shared Files and Directories

ARBORPATH, which includes configuration files, security files, and all applications and corresponding databases, is on a shared disk. Instance-specific files, binaries, and log files are on a local disk by default but can be shared. For example, you could put log files on a shared disk by changing a configuration setting in opmn.xml.

Table 15–2 (Cont.) Essbase Configuration Artifacts
File: OPMN configuration file, opmn.xml
Location: ORACLE_INSTANCE/config/OPMN/opmn
Description: Lists all the managed components and the environment variables to be passed to each managed component.

15.1.4.1.2 Cluster-Wide Configuration Changes

Shared configuration information is in essbase.cfg, which is on a shared disk. For example, changes in timeout settings for lease acquisition or renewal are made in essbase.cfg on the shared disk. Because essbase.cfg is shared, a single edit takes effect for every node in the cluster.
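As an illustration only, a shared essbase.cfg might carry lease timeout settings similar to the sketch below. The setting names and values here are hypothetical placeholders; consult the Essbase configuration reference for the settings actually supported by your release.

```
; Hypothetical sketch of lease-related settings in a shared essbase.cfg.
; Setting names and values are illustrative placeholders, not verified settings.
AgentLeaseExpirationTime 20
AgentLeaseRenewalTime 10
```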

15.1.4.2 Protection from Failures and Expected Behaviors

For failover, you can configure Essbase in a two-node cluster. A failover cluster setup includes:

■ Two nodes. One node is active at all times and one node is passive.
■ A shared disk between the nodes. The shared disk stores applications, databases, product configuration, and everything else placed under ARBORPATH.
■ Product binaries and other files in ESSBASEPATH. Each node can keep these binaries on a direct attached storage disk.
■ Registry and lease databases. These databases, which are critical, must be on redundant storage devices.

A leasing mechanism ensures that one and only one Essbase Agent and its set of application servers are active. One and only one lease can be active on a resource at any time; the owner of the active lease has ownership rights to the resource. This leasing scheme is implemented using lease tables in the database, which are seeded by Repository Creation Utility (RCU).

To take advantage of failover, Java API clients must log on using the Provider Services (APS) servlet endpoint URL. The URL specifies the cluster name of the failover cluster, for example:

http://WEBHOST1:7777/aps/Essbase?clusterName=cludemo

Provider Services resolves the cluster name to the host name and port of the active node in the cluster. Session failover is not supported: if a service failover occurs, the client receives an error message and must log on again.
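The lease protocol described above can be illustrated with a small model. This is not Essbase code: it is a minimal sketch, under assumed names and an in-memory table, of how a lease with an expiry time guarantees that only one agent owns a resource at a time, and how a passive node takes over once the active node stops renewing.

```python
# Minimal model of lease-based active/passive failover: one entry per resource,
# and an agent owns the resource only while it holds an unexpired lease.
# The table layout and names are assumptions for illustration, not the RCU schema.
LEASE_TTL = 10  # seconds a lease stays valid without renewal

leases = {}  # resource -> (owner, expiry_time)

def try_acquire(resource, owner, now):
    """Grant the lease if no one holds it or the current lease has expired."""
    holder = leases.get(resource)
    if holder is None or holder[1] <= now:
        leases[resource] = (owner, now + LEASE_TTL)
        return True
    return holder[0] == owner  # the existing owner keeps its lease

def renew(resource, owner, now):
    """Extend the lease, but only for the current, unexpired owner."""
    holder = leases.get(resource)
    if holder and holder[0] == owner and holder[1] > now:
        leases[resource] = (owner, now + LEASE_TTL)
        return True
    return False

# Node A becomes active; node B stays passive because the lease is taken.
assert try_acquire("essbase-agent", "nodeA", now=0)
assert not try_acquire("essbase-agent", "nodeB", now=1)
# Node A stops renewing (failure); after the lease expires, node B takes over.
assert try_acquire("essbase-agent", "nodeB", now=11)
assert not renew("essbase-agent", "nodeA", now=12)
```

In the real deployment the lease state lives in database tables rather than memory, so both nodes see a single authoritative record of who owns the agent.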

15.1.4.3 Troubleshooting