
10.3.7 Handling Uneven Message Loads and/or Message Processing Delays

For applications with uneven message loads or unanticipated message processing delays, consider the following:

■ For local distributed topics when the topic distribution mode is One-Copy-Per-Server or One-Copy-Per-Application, tune distributed-destination-connection to EveryMember. Although the LocalOnly option can yield significantly better performance because it avoids unnecessary network traffic, there are use cases where the network savings of the LocalOnly optimization do not outweigh the benefit of distributing message processing for unbalanced queue loads as evenly as possible across all JVMs in a cluster. This is especially a concern when message backlogs develop unevenly throughout the cluster and message processing is expensive. In these use cases, avoid the LocalOnly configuration in favor of the EveryMember scenario with durable subscribers. See the descriptor sketch after this list.

■ Use a PDT instead of an RDT, and tune producer load balancing in the producer's connection factory configuration so that each producer's messages are processed evenly on a round-robin basis throughout the cluster. Incoming messages can be load balanced among the distributed topic members using the WebLogic JMS connection factory Server Affinity Enabled and Load Balancing Enabled attributes. Disabling affinity can increase network overhead but helps ensure that messages are evenly load balanced across a cluster. The affinity setting has no effect with RDTs. See "Load Balancing Messages Across a Distributed Destination" in Configuring and Managing JMS for Oracle WebLogic Server.

■ Decrease the WebLogic JMS asynchronous message pipeline size to 1 to prevent additional messages from being pushed to an MDB thread that is already blocked processing a previous message. The default for this setting is 10. It is configured by (a) configuring a custom WebLogic connection factory with the Messages Maximum attribute tuned to 1 and XA Enabled set to true, (b) targeting the connection factory to the same cluster that hosts the distributed topic, and (c) modifying the MDB so that it references the custom connection factory, as shown in the connection factory sketch after this list.
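The following descriptor fragments are illustrative sketches only; the ejb-name, module name, and JNDI names (MyTopicMDB, TunedModule-jms.xml, jms/MyDistributedTopic, jms/MyTunedCF) are placeholders, and targeting the JMS module and subdeployment to the cluster is assumed to be done separately. The first sketch shows an MDB in weblogic-ejb-jar.xml tuned to connect to every distributed topic member and to use a custom connection factory:

   <weblogic-ejb-jar xmlns="http://xmlns.oracle.com/weblogic/weblogic-ejb-jar">
     <weblogic-enterprise-bean>
       <ejb-name>MyTopicMDB</ejb-name>
       <message-driven-descriptor>
         <!-- The distributed topic the MDB subscribes to (placeholder JNDI name) -->
         <destination-jndi-name>jms/MyDistributedTopic</destination-jndi-name>
         <!-- Reference the custom connection factory defined below -->
         <connection-factory-jndi-name>jms/MyTunedCF</connection-factory-jndi-name>
         <!-- Connect to every member instead of only the local member -->
         <distributed-destination-connection>EveryMember</distributed-destination-connection>
       </message-driven-descriptor>
     </weblogic-enterprise-bean>
   </weblogic-ejb-jar>

The second sketch shows a custom connection factory in a WebLogic JMS module descriptor (for example, TunedModule-jms.xml) with the pipeline reduced to 1, XA enabled, round-robin load balancing enabled, and server affinity disabled, per the tuning advice above:

   <weblogic-jms xmlns="http://xmlns.oracle.com/weblogic/weblogic-jms">
     <connection-factory name="TunedCF">
       <jndi-name>jms/MyTunedCF</jndi-name>
       <client-params>
         <!-- Messages Maximum = 1: at most one unconsumed message per async consumer -->
         <messages-maximum>1</messages-maximum>
       </client-params>
       <transaction-params>
         <!-- XA Enabled = true -->
         <xa-connection-factory-enabled>true</xa-connection-factory-enabled>
       </transaction-params>
       <load-balancing-params>
         <!-- Round-robin producers across members; disable affinity to the local server -->
         <load-balancing-enabled>true</load-balancing-enabled>
         <server-affinity-enabled>false</server-affinity-enabled>
       </load-balancing-params>
     </connection-factory>
   </weblogic-jms>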

10.4 Configuring for Service Migration