Workspace High Availability Architecture

Configuring High Availability for Oracle Business Intelligence and EPM 15-29

15.1.7.1.3 Process Lifecycle

EPM Workspace is managed as a standard J2EE Web application deployed to the WebLogic application server, which starts, manages, and terminates it.

15.1.7.1.4 Request Flow

Requests reach EPM Workspace through the HTTP proxy, which is the only means for end users to access EPM Workspace. EPM Workspace makes no requests to other Web applications or databases, apart from its repository database and the EPM Registry database.

15.1.7.1.5 External Dependencies

EPM Workspace relies on its repository database and the Hyperion Registry; both reside in the same schema. Most EPM products rely on EPM Workspace to provide initial authentication and a containing user interface.

15.1.7.1.6 Configuration Artifacts

The entire EPM Workspace configuration is stored in the Hyperion Registry. Individual user preferences are stored in the EPM Workspace repository.

15.1.7.1.7 Deployment Artifacts

EPM Workspace has no deployment artifacts other than the JDBC data source.
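Because the JDBC data source is the only deployment artifact, clustering it amounts to defining one data source descriptor and targeting it at every cluster member. The following is a minimal sketch of a WebLogic JDBC system resource for the repository database; the resource name, JNDI name, connection URL, and schema user are placeholders invented for this example, not the values that the EPM configuration tools actually generate.

```xml
<!-- Hypothetical sketch only: names, URL, and user are placeholders. -->
<jdbc-data-source xmlns="http://xmlns.oracle.com/weblogic/jdbc-data-source">
  <name>WorkspaceRepositoryDS</name>
  <jdbc-driver-params>
    <driver-name>oracle.jdbc.OracleDriver</driver-name>
    <url>jdbc:oracle:thin:@//dbhost:1521/epmdb</url>
    <properties>
      <property>
        <name>user</name>
        <value>epm_workspace</value>
      </property>
    </properties>
  </jdbc-driver-params>
  <jdbc-data-source-params>
    <jndi-name>jdbc/WorkspaceRepositoryDS</jndi-name>
    <global-transactions-protocol>None</global-transactions-protocol>
  </jdbc-data-source-params>
</jdbc-data-source>
```

When the descriptor is targeted at the cluster rather than at an individual managed server, every cluster member obtains an identical connection pool to the shared repository schema.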

15.1.7.1.8 Log Files

To view workspace.log and Framework.log, go to the logs/workspace subdirectory in the managed server domain home.

15.1.8 Oracle EPM Workspace High Availability Concepts

This section provides conceptual information about using EPM Workspace in a high availability environment.

15.1.8.1 Workspace High Availability Architecture

You can cluster both the HTTP proxy and the J2EE deployment of EPM Workspace using standard WebLogic Server clustering mechanisms and standard use of application server deployment and data sources. EPM Workspace high availability requires sticky sessions; session replication and failover are not supported. All clustered instances must run identical versions, and all updates to cluster members must be made at the same time. The load balancer hardware or HTTP proxy allocates requests in a round-robin rotation.

Requests fail if the repository database or Hyperion Registry is lost. When a request fails, subsequent requests attempt to initialize EPM Workspace and restore its database connectivity.

Database use increases markedly when an EPM Workspace instance is started and initialized. If there are many deployments, cluster startup may be faster if you stagger the startup of the instances.
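The routing behavior described above — new sessions allocated in round-robin rotation, existing sessions pinned to the member that created them — can be sketched in a few lines of illustrative Python. The class and member names are invented for this example and are not part of any Oracle API.

```python
from itertools import cycle

class StickyRoundRobin:
    """Toy router: a new session gets the next cluster member in
    round-robin order; a known session always returns to the member
    that first served it (sticky sessions)."""

    def __init__(self, members):
        self._rotation = cycle(members)   # endless round-robin over members
        self._pinned = {}                 # session id -> pinned member

    def route(self, session_id):
        if session_id not in self._pinned:
            self._pinned[session_id] = next(self._rotation)
        return self._pinned[session_id]

router = StickyRoundRobin(["wls1", "wls2"])
print(router.route("s1"))  # wls1 (new session, first member in rotation)
print(router.route("s2"))  # wls2 (new session, next member in rotation)
print(router.route("s1"))  # wls1 (sticky: same member as before)
```

Because session state is not replicated, losing a cluster member invalidates the sessions pinned to it; those users must sign in again and are then routed to a surviving member.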

15.1.8.1.1 Shared Files and Directories

EPM Workspace has no shared files or directories except for Oracle JRF and EPM common files.

15.1.8.1.2 Cluster-Wide Configuration Changes

Product binaries and deployment information generated by WebLogic Server are stored in the file system. The configuration is stored in the Hyperion Registry, so all Workspace instances share the same configuration. You can change the configuration through the EPM Workspace administrative user interface or through explicit updates to the registry outside EPM Workspace.

Normally, the file system artifacts change only for a patch or for a reconfiguration of the deployment or clustering that requires Oracle WebLogic Server file changes. Configuration changes do not require file system updates. Cluster members see the updated configuration at the next reinitialization; because the Hyperion Registry is not polled, each cluster member must be directed to reinitialize itself.

15.1.8.2 Protection from Failures and Expected Behaviors