Oracle UCM High Availability Architecture

The following clients depend on Oracle UCM:

■ Oracle Universal Records Management (Oracle URM)
■ Oracle Imaging and Process Management (Oracle IPM)
■ Oracle WebCenter

The connection from clients is short-lived, and is only needed for the duration of a sessionless service call. Clients can connect to Oracle UCM using the HTTP, SOAP/Web Services, JCR, and VCR protocols.
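For example, a service call over HTTP can be issued against the Content Server's CGI-style idcplg entry point. The following Python sketch is a minimal illustration only; the host, port, and service details are placeholders to adapt to your installation, and the call may require authentication depending on the server's security configuration.

import urllib.request

# Placeholder host and port for an Oracle UCM managed server.
UCM_URL = "http://ecmhost1.example.com:16200/cs/idcplg"

def ping_server():
    # PING_SERVER is a lightweight service used here to verify connectivity;
    # IsJson=1 asks the server for a JSON-formatted response.
    url = UCM_URL + "?IdcService=PING_SERVER&IsJson=1"
    # The connection is opened for this one call and closed on exit,
    # matching the short-lived, sessionless connection model described above.
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8")

print(ping_server())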

11.2.1.1.7 Oracle UCM Log File Locations

Oracle UCM is a J2EE application deployed on WebLogic Server. Log messages are logged in the server log file of the WebLogic Server that the application is deployed on. The default location of the server log is:

WL_HOME/user_projects/domains/domainName/servers/serverName/logs/serverName-diagnostic.log

Oracle UCM can also keep logs in:

WebLayoutDir/groups/secure/logs

Oracle UCM trace files can be configured to be captured and stored in:

IntraDocDir/data/trace

To view log files using the Oracle UCM GUI, choose the UCM menu and then choose Administration > Logs. To view trace files using the Oracle UCM GUI, choose the UCM menu and then choose System Audit Information. Click the View Server Output and View Trapped Output links to view current tracing and captured tracing.
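As an illustration, the following Python sketch assembles these default log and trace locations from placeholder installation paths and prints the tail of the diagnostic log. All directory values are hypothetical; substitute the ones for your environment.

import os

# Placeholder installation values; substitute your own.
WL_HOME = "/u01/oracle/wlserver"
DOMAIN_NAME = "ecm_domain"
SERVER_NAME = "WLS_UCM1"
WEBLAYOUT_DIR = "/u01/oracle/ucm/weblayout"   # WebLayoutDir
INTRADOC_DIR = "/u01/oracle/ucm/cs"           # IntraDocDir

# Default WebLogic Server diagnostic log location, per the path quoted above.
diag_log = os.path.join(WL_HOME, "user_projects", "domains", DOMAIN_NAME,
                        "servers", SERVER_NAME, "logs",
                        SERVER_NAME + "-diagnostic.log")

# Additional Oracle UCM log and trace locations.
ucm_logs = os.path.join(WEBLAYOUT_DIR, "groups", "secure", "logs")
ucm_trace = os.path.join(INTRADOC_DIR, "data", "trace")

def tail(path, n=20):
    # Print the last n lines of a log file, if it exists.
    if not os.path.exists(path):
        print("not found:", path)
        return
    with open(path, errors="replace") as f:
        for line in f.readlines()[-n:]:
            print(line, end="")

tail(diag_log)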

11.2.2 Oracle UCM High Availability Concepts

This section provides conceptual information about using Oracle UCM in a high availability two-node cluster.

11.2.2.1 Oracle UCM High Availability Architecture

Figure 11–4 shows a two-node active-active Oracle Universal Content Management cluster.

Figure 11–4 Oracle Universal Content Management Two-Node Cluster

(The figure depicts Internet traffic arriving at a load balancer and passing through a firewall to Oracle HTTP Server (OHS) instances on WEBHOST1 and WEBHOST2, which route to Cluster_UCM: managed servers WLS_UCM1 on ECMHOST1 and WLS_UCM2 on ECMHOST2, each running UCM, along with the Admin Server. Behind a second firewall, both nodes share UCM configuration files on a shared disk and an Oracle RAC database.)

In the Oracle UCM high availability configuration shown in Figure 11–4:

■ Each node runs independently against the shared file system, the same database schema, and the same search indexes. Each client request is served completely by a single node.
■ Oracle UCM can run with a WebLogic Server cluster or an external load balancer. Oracle UCM is also transparent to Oracle RAC and multi data source configurations.
■ Oracle UCM nodes can be scaled independently, and there is limited inter-node communication. Nodes communicate by writing to and reading from a shared file system. This shared file system must support synchronized write operations.
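The synchronized write requirement means that concurrent writers on different nodes must be able to serialize their updates to shared files. The following Python sketch is a conceptual illustration of that idea using POSIX advisory locks; the mount point and file name are made up, and this is not Oracle UCM's internal mechanism.

import fcntl
import os

SHARED_DIR = "/mnt/ucm_shared"                               # hypothetical shared mount
STATE_FILE = os.path.join(SHARED_DIR, "cluster_state.dat")   # made-up file name

def synchronized_write(data):
    # Take an exclusive advisory lock so that writers on other nodes
    # serialize their updates; this only works if the shared file system
    # honors locks across nodes (the synchronized write requirement).
    with open(STATE_FILE, "ab") as f:
        fcntl.flock(f, fcntl.LOCK_EX)
        try:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # force the write to stable storage
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)

synchronized_write(b"node heartbeat\n")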

11.2.2.1.1 Starting and Stopping the Cluster

When the cluster starts, each Oracle UCM node goes through its normal initialization sequence, which parses, prepares, and caches its resources, prepares its connections, and so on. If a node is part of a cluster, in-memory replication is initiated when other cluster members are available. One or all members of the cluster can be started at any one time.

Shutting down a cluster member makes only that member unavailable to service requests. When a server is shut down gracefully, it finishes processing current requests, signals its unavailability, and then releases all shared resources and closes its file and database connections. All session state is replicated as well, allowing users originally connected to this cluster member to fail over to another member of the cluster.
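A graceful shutdown of a single cluster member can be scripted with WLST, the WebLogic Scripting Tool (which uses Jython syntax). The sketch below uses placeholder credentials, admin server URL, and server name; force='false' requests the graceful behavior described above, letting in-flight requests complete before the server stops.

# WLST (Jython) sketch: gracefully shut down one Oracle UCM cluster member.
# Credentials, admin server URL, and server name are placeholders.
connect('weblogic', 'password', 't3://adminhost.example.com:7001')

# force='false' allows current requests to finish; block='true' waits
# for the shutdown to complete before the script continues.
shutdown('WLS_UCM1', 'Server', force='false', block='true')

disconnect()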

11.2.2.1.2 Cluster-Wide Configuration Changes

At the cluster level, new Oracle UCM features or customizations of behavior can be introduced through Oracle UCM internal components; nodes need to be restarted to pick up these changes. The metadata model, for example, can change system wide: a metadata field can be added, modified, or deleted, and the system behavior driven by these metadata fields can change. Such a change is automatically picked up by cluster nodes through notification between nodes.

For changes to occur on each cluster member, the first node needs to have shared folders configured. As long as all the shared folders have the same mount point on each cluster node, the other nodes do not need to make manual changes using the WebLogic Server pack/unpack utilities.
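Conceptually, a node can notice a cluster-wide change by observing updates in the shared folders. The Python sketch below illustrates that pattern with a made-up metadata file and a simple modification-time poll; it is an illustration only, not Oracle UCM's actual notification mechanism.

import os
import time

# Hypothetical metadata model file on the shared mount (same path on every node).
SHARED_METADATA = "/mnt/ucm_shared/config/metadata_model.cfg"

def watch_for_changes(poll_seconds=5):
    # Poll the shared file's modification time and react when another
    # node changes it.
    last_mtime = os.path.getmtime(SHARED_METADATA)
    while True:
        time.sleep(poll_seconds)
        mtime = os.path.getmtime(SHARED_METADATA)
        if mtime != last_mtime:
            last_mtime = mtime
            print("metadata model changed; reloading cached resources")
            # A real node would re-parse and re-cache its metadata model here.

watch_for_changes()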

11.2.2.2 Oracle UCM and Inbound Refinery High Availability Architecture