27.4.2 Using Connection Pooling

You may be able to improve performance by reusing existing database connections. You can limit the number of connections, the duration of sessions, and other parameters by modifying the connection strings. See Section 2.7, "Reuse Database Connections," for more information on configuring database connection pools.
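As an illustration only, the following minimal sketch shows the kind of pool limits and timeouts involved, using the open-source HikariCP pool rather than an Oracle Fusion Middleware data source; the JDBC URL, credentials, and values shown are hypothetical and should be derived from your own load testing.

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PooledDataSource {
    public static HikariDataSource create() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:oracle:thin:@//dbhost:1521/ORCLPDB"); // hypothetical URL
        config.setUsername("app_user");                               // hypothetical credentials
        config.setPassword("app_password");
        config.setMaximumPoolSize(20);        // cap the number of concurrent connections
        config.setMinimumIdle(5);             // keep a few connections open for reuse
        config.setIdleTimeout(120_000);       // retire connections idle for more than 2 minutes
        config.setConnectionTimeout(10_000);  // fail fast when the pool is exhausted
        return new HikariDataSource(config);
    }
}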

27.4.3 Setting the Max Heap Size on JVMs

This is an application-specific tunable that enables a trade-off between garbage collection times and the number of JVMs that can run on the same hardware. Large heaps are used more efficiently and often result in fewer garbage collections, while more JVM processes offer more failover points. See Section 2.4, "Tune Java Virtual Machines (JVMs)," for more information.
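The maximum heap is normally set when the JVM is launched, for example with the -Xms and -Xmx options. The following minimal sketch simply reports the heap limits a running JVM was started with; the 4 GB value in the comment is hypothetical.

public class HeapCheck {
    public static void main(String[] args) {
        // A server JVM is typically started with explicit heap bounds, for example:
        //   java -Xms4g -Xmx4g ...   (hypothetical sizes)
        Runtime rt = Runtime.getRuntime();
        System.out.printf("Max heap (-Xmx):  %d MB%n", rt.maxMemory() / (1024 * 1024));
        System.out.printf("Current heap:     %d MB%n", rt.totalMemory() / (1024 * 1024));
        System.out.printf("Free within heap: %d MB%n", rt.freeMemory() / (1024 * 1024));
    }
}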

27.4.4 Increasing Memory or CPU

Aggregating more memory and/or CPU on a single hardware resource allows localized communication between the instances sharing that hardware. More physical memory and processing power on a single machine enables the JVMs to scale and run much larger and more powerful instances, especially 64-bit JVMs. Large JVMs tend to use memory more efficiently, and garbage collections tend to occur less frequently. In some cases, adding more CPUs means that the machine has more instruction and data cache available to the processing units, which means even higher processing efficiency. See Section 2.2, "Ensure the Hardware Resources are Sufficient," for more information.

27.4.5 Segregation of Network Traffic

Network-intensive applications can introduce significant performance issues for other applications that use the same network. Segregating the network traffic of time-critical applications from that of network-intensive applications, so that they are routed to different network interfaces, may reduce these performance impacts. It is also possible to assign different routing priorities to the traffic originating from different network interfaces.
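One way to route a time-critical service to a dedicated interface is to bind its listener to that interface's address instead of to all interfaces. The following sketch assumes a hypothetical interface address (10.0.1.15) and port; it is an illustration, not a configuration taken from this guide.

import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;

public class DedicatedInterfaceListener {
    public static void main(String[] args) throws IOException {
        // Bind only to the interface reserved for time-critical traffic,
        // rather than to all interfaces (0.0.0.0).
        InetAddress timeCriticalNic = InetAddress.getByName("10.0.1.15"); // hypothetical address
        try (ServerSocket server = new ServerSocket(8080, 50, timeCriticalNic)) {
            System.out.println("Listening on " + server.getLocalSocketAddress());
            server.accept().close(); // accept one connection, then exit (demonstration only)
        }
    }
}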

27.4.6 Segregation of Processes and Hardware Interrupt Handlers

When planning the capacity that a specific hardware resource can handle, it is important to understand that the operating system may not be able to efficiently schedule the JVM processes alongside other system processes and hardware interrupt handlers. The JVM may experience performance impacts if it shares even a few of its CPU cores with the hardware interrupt handlers. For example, disk- and network-intensive applications may induce performance impacts that are disproportionate to the load experienced by the CPU. In addition, hardware interrupts can prevent the active Java threads from reaching a GC-safe point efficiently. Separating frequent hardware interrupt handlers from the CPUs running the JVM process can reduce the wait for garbage collections to start.

It may also be beneficial to dedicate sibling CPUs on a multi-core machine to a single JVM to increase the efficiency of its CPU cache. If multiple processes have to share a CPU, the data and instruction caches can be contaminated with the data and instructions from both processes, reducing the amount of cache used effectively. Assigning processes to specific CPU cores, however, can make it impossible to use other CPU cores during peak load bursts. The capacity management plan should include a determination of whether the CPUs should be used more efficiently for the nominal load, or whether some extra capacity should be reserved for bursts of activity.
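On Linux, CPU affinity of this kind is typically applied outside the JVM, for example with the util-linux taskset command. The following minimal sketch launches a JVM pinned to cores 2-7, leaving cores 0-1 for the operating system and interrupt handlers; the core range, heap sizes, and jar name are hypothetical.

import java.io.IOException;

public class PinnedJvmLauncher {
    public static void main(String[] args) throws IOException {
        ProcessBuilder pb = new ProcessBuilder(
                "taskset", "-c", "2-7",         // restrict the child process to cores 2-7
                "java", "-Xms4g", "-Xmx4g",     // heap bounds chosen during capacity planning
                "-jar", "managed-server.jar");  // hypothetical server launcher
        pb.inheritIO();                         // forward the pinned JVM's stdout/stderr
        pb.start();
    }
}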

27.5 Implementing a Capacity Management Plan

Once you have defined your performance objectives, measured your workload, and identified any bottlenecks, you must create and implement a capacity management plan. The goal of your plan should be to meet or exceed your performance objectives, especially during peak usage periods, and to allow for future workload increases. To achieve your performance objectives, you must implement your management plan and then continuously monitor the performance metrics, as discussed in Chapter 4, "Monitoring Oracle Fusion Middleware." Since no two deployments are identical, it is virtually impossible to illustrate how a capacity management plan would be implemented for all configurations. Capacity planning is an iterative process, and your plan must be recalibrated as your workload or environment changes. The following sections describe key factors that should be addressed in the plan:

27.5.1 Hardware Configuration Requirements

There is no single formula for determining your hardware requirements. The process of determining what type of hardware and software configuration is required involves an assessment of your system performance goals and an understanding of your application. Capacity planning for server hardware should focus on maximum performance requirements. The hardware requirements you have today are likely to change. Your plan should allow for workload increases, environment changes (such as added servers or third-party services), software upgrades (operating systems, middleware, or other applications), and changes in network connectivity and network protocols.

27.5.1.1 CPU Requirements

Your target CPU usage should not be 100%. You should determine a target CPU utilization based on your application needs, including CPU cycles for peak usage. If your CPU utilization is optimized at 100% during normal load hours, you have no capacity to handle a peak load. In applications that are latency sensitive, where maintaining a fast response time is important, high CPU usage (approaching 100% utilization) can increase response times while throughput stays constant or even increases, because work queues up in the server. For such applications, a CPU utilization of 70% to 80% is recommended. A good target for non-latency-sensitive applications is about 90%.
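As a worked illustration of the sizing arithmetic (not a formula from this guide), the following sketch estimates the number of cores needed to keep utilization at a 75% target; the measured peak load and per-request CPU cost shown are hypothetical.

public class CpuSizing {
    public static void main(String[] args) {
        double peakRequestsPerSecond = 400.0;  // hypothetical measured peak load
        double cpuSecondsPerRequest  = 0.015;  // hypothetical CPU cost per request
        double targetUtilization     = 0.75;   // 70% to 80% target for latency-sensitive apps

        // CPU-seconds of work arriving each second at peak: 400 * 0.015 = 6.0
        double cpuSecondsPerSecond = peakRequestsPerSecond * cpuSecondsPerRequest;
        // Cores needed so that 6.0 CPU-seconds of work stays at 75% utilization: ceil(6.0 / 0.75) = 8
        int coresRequired = (int) Math.ceil(cpuSecondsPerSecond / targetUtilization);
        System.out.println("Cores required: " + coresRequired);
    }
}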

27.5.1.2 Memory Requirements

Memory requirements are determined by the optimal heap size for the applications you are going to use, for each JVM co-located on the same hardware. Each JVM needs up to 500 MB in addition to the optimal heap size; the actual impact on performance depends on the JVM brand and on the type of application being run. For example, applications with more Java classes loaded need more space for compiled classes. 32-bit JVMs normally cannot exceed a limit of approximately 3 GB on some architectures, where the limit is imposed by the hardware architecture and the operating system. It is recommended to reserve some memory for the operating system, I/O buffers, and shared-memory devices.
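As a worked illustration, the following sketch adds up the heap, per-JVM overhead, and operating system reserve for several co-located JVMs; all of the counts and sizes are hypothetical and should come from your own measurements.

public class MemorySizing {
    public static void main(String[] args) {
        int jvmCount      = 4;     // hypothetical number of co-located JVMs
        int heapPerJvmMb  = 4096;  // optimal heap size per JVM, determined by load testing
        int jvmOverheadMb = 500;   // per-JVM overhead beyond the heap (up to ~500 MB, per the text)
        int osReserveMb   = 2048;  // hypothetical reserve for the OS, I/O buffers, shared memory

        // 4 * (4096 + 500) + 2048 = 20432 MB of physical memory
        int totalMb = jvmCount * (heapPerJvmMb + jvmOverheadMb) + osReserveMb;
        System.out.println("Estimated physical memory: " + totalMb + " MB");
    }
}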