
■ Oracle IPM requires a database to store its configuration. The database is initially created via RCU and is managed through WebLogic Server JDBC data sources. The database is accessed via TopLink.

■ Oracle IPM leverages a variety of Oracle and Java technologies, including JAX-WS, JAXB, ADF, TopLink, and JMS. These are included with the installation and do not require external configuration.

■ Oracle IPM provides MBeans for configuration. These are available through WLST and the Enterprise Manager MBean browser. Oracle IPM also provides a few custom WLST commands (see the WLST sketch after this list).

■ Oracle IPM depends on the existence of an Oracle UCM repository.

The following clients are dependent upon Oracle IPM:

■ The Oracle IPM UI is built on its public toolkit, the Oracle IPM API. The UI and core API are distributed in the same EAR file.

■ Oracle IPM provides Web services for integrations with other products, and it provides a set of Java classes that wrap those Web services as a convenience for Java integrations.

■ Oracle IPM provides a URL toolkit to facilitate other applications interacting with Oracle IPM searches and content.

■ Oracle IPM provides a REST toolkit for referencing individual pages of documents for web presentation.

Clients connect to Oracle IPM as follows:

■ Through JAX-WS for the Web services and the Java client. These connections are stateless and perform a single function.

■ Through HTTP for access to the URL and REST tools:

– REST requests are stateless and perform a single function.

– URL tools log into the Oracle IPM UI and provide a more stateful experience with the relevant UI components.
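The following is a minimal, read-only WLST sketch of browsing the MBeans that deployed applications such as Oracle IPM register with the server. The administrator credentials, host, and port are placeholder assumptions, and the sketch does not name any specific Oracle IPM MBean or custom WLST command; it only shows where such MBeans can be found.

# Minimal WLST (Jython) sketch, run with wlst.sh. Credentials, host, and
# port below are placeholder assumptions for an example domain.
connect('weblogic', 'welcome1', 't3://adminhost.example.com:7001')

# Switch to the custom MBean tree, where configuration MBeans registered
# by deployed applications (including Oracle IPM) appear.
custom()
ls()            # list the registered custom MBean domains

disconnect()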

11.1.1.1.6 Oracle IPM Log File Location

Oracle IPM is a JEE application deployed on WebLogic Server. All log messages are logged in the server log files of the WebLogic Server instance that the application is deployed on. The default location of the diagnostics log file is:

WL_HOME/user_projects/domains/domainName/servers/serverName/logs/serverName-diagnostic.log

You can use Oracle Enterprise Manager to easily search and parse these log files.
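As a quick illustration of the default log location, the following sketch assembles the diagnostic log path for a hypothetical installation and prints any error-level entries. The WL_HOME, domain name, and server name values are placeholder assumptions.

# Sketch: build the default diagnostic log path and scan it for errors.
# All path components are placeholder assumptions; adjust them to match
# your own installation.
import os

wl_home = '/u01/oracle/middleware/wlserver_10.3'   # assumed WL_HOME
domain_name = 'base_domain'                        # assumed domain name
server_name = 'WLS_IPM1'                           # assumed IPM managed server

log_path = os.path.join(
    wl_home, 'user_projects', 'domains', domain_name,
    'servers', server_name, 'logs', server_name + '-diagnostic.log')

# Print lines that carry an ERROR severity marker.
log = open(log_path)
for line in log:
    if '[ERROR]' in line:
        print(line.rstrip())
log.close()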

11.1.2 Oracle IPM High Availability Concepts

This section provides conceptual information about using Oracle IPM in a two-node, high availability cluster.

11.1.2.1 Oracle IPM High Availability Architecture

Figure 11–2 shows an Oracle IPM high availability architecture.

Figure 11–2 Oracle IPM High Availability Architecture Diagram

Oracle IPM can be configured in a standard two-node active-active high availability configuration. Although Oracle IPM's UI layer does not fail over between active servers, its other background processes do. In the Oracle IPM high availability configuration shown in Figure 11–2:

■ The Oracle IPM nodes are exact replicas of each other. All the nodes in a high availability configuration perform the same services and are configured through the centralized configuration database, from which all servers pull their configuration.

■ A load balancing router with sticky session routing must front-end the Oracle IPM nodes. Oracle HTTP Server can be used for this purpose.

■ Oracle IPM can run either within a WebLogic Server cluster or without one.

■ All the Oracle IPM instances must share the same database connectivity. This can be Oracle RAC for increased high availability. Oracle IPM uses TopLink for database operations. (A WLST sketch for verifying the shared data source follows this list.)

■ Oracle IPM requires one common shared directory from which its Input Agent draws inbound data files staged for ingestion. Once an Input Agent pulls an input definition file, processing of the input file and its associated images stays isolated to that WebLogic Server instance.
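The following read-only WLST sketch illustrates how to confirm that the managed servers shown in Figure 11–2 are members of one cluster and that the Oracle IPM data source is targeted to that cluster, so that all instances share the same configuration database. The data source name IPMDS and the connection details are assumptions for illustration; the cluster name is taken from the figure.

# Minimal WLST sketch (run with wlst.sh). The data source name 'IPMDS'
# and the connection details are assumptions; IPM_Cluster comes from
# Figure 11-2.
connect('weblogic', 'welcome1', 't3://adminhost.example.com:7001')

# Show the IPM cluster configuration, including its member servers.
cd('/Clusters/IPM_Cluster')
ls()

# Show the assumed IPM data source, including its Targets attribute.
cd('/JDBCSystemResources/IPMDS')
ls()

disconnect()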

11.1.2.1.1 Starting and Stopping the Cluster

Oracle IPM's agents (Input Agent and BPEL Agent) are started as an integral part of the Oracle IPM application residing in the WebLogic Server instances in the cluster. They begin processing immediately, based on outstanding work sitting in the JMS queues, which are persisted and therefore preserved across failures. The Input Agent also looks for work in its inbound file directory. If there are files to be processed, they are pushed to the corresponding JMS queues, and a server in the cluster consumes them and processes the associated images. The number of threads dedicated to the BPEL Agent and Input Agent is controlled through the corresponding WebLogic Server work managers.

When the servers in a cluster are stopped, the agents finish their current activity. The BPEL Agent has a short run cycle, and all of its work is likely completed before shutdown; in-flight BPEL invocations are retried three times, and the JMS logs preserve pending operations. The Input Agent can process files of considerable size, and a large input file can take hours to process. The amount of work pending after a stop is preserved in the associated JMS persistence stores. When the server hosting the Input Agent is restarted, the agent resumes processing where it left off. After a restart or server migration, processing of image files continues in the server that initially consumed the corresponding input file.
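Before a planned shutdown, the JMS runtime MBeans can be inspected to see how much agent work is still queued. The following WLST sketch is read-only; the managed server name comes from Figure 11–2, the JMS runtime name follows the usual serverName.jms convention, and the destinations listed are whatever the Oracle IPM installation registered, none of which are hard-coded here.

# Minimal WLST sketch (run with wlst.sh): inspect pending JMS work on one
# IPM managed server. The JMS runtime name 'WLS_IPM1.jms' assumes the
# usual <serverName>.jms naming convention.
connect('weblogic', 'welcome1', 't3://adminhost.example.com:7001')
domainRuntime()

# List the JMS servers hosted on WLS_IPM1; drilling into a destination
# under one of them exposes attributes such as MessagesCurrentCount.
cd('ServerRuntimes/WLS_IPM1/JMSRuntime/WLS_IPM1.jms/JMSServers')
ls()

disconnect()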

11.1.2.1.2 Cluster-Wide Configuration Changes

Oracle IPM configuration is stored in the Oracle IPM database and shared by all the Oracle IPM instances in the cluster. The number of threads that the BPEL Agent and Input Agent use on a particular Oracle IPM instance is a WebLogic Server property stored as part of the Oracle IPM application configuration, and it is controlled through the Administration Console or with WLST commands.
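The sketch below shows where domain-level work manager and thread-constraint configuration can be viewed with WLST. The domain name base_domain is an assumption; Oracle IPM's agent work managers may be scoped to the application rather than the domain, in which case they are managed through the Administration Console or the application's deployment plan instead.

# Minimal WLST sketch (run with wlst.sh): view domain-level work manager
# and thread-constraint configuration. 'base_domain' is an assumed
# domain name.
connect('weblogic', 'welcome1', 't3://adminhost.example.com:7001')

cd('/SelfTuning/base_domain')
ls('WorkManagers')          # work managers defined at the domain level
ls('MaxThreadsConstraints') # thread constraints they can reference

disconnect()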

11.1.2.2 Protection from Failures and Expected Behaviors