
■ Development, implementation, and maintenance costs.

You can use this information to set realistic performance objectives for your application environment, such as response times, throughput, and load on specific hardware.

27.3 Measuring Your Performance Metrics

After you have determined your performance criteria in Section 27.2, "Determining Performance Goals and Objectives," take measurements of the metrics you can use to quantify your performance objectives. Benchmarking key performance indicators provides a performance baseline. See Chapter 4, "Monitoring Oracle Fusion Middleware," for information on measuring your performance metrics with Oracle Fusion Middleware applications.

27.4 Identifying Bottlenecks in Your System

Bottlenecks, or areas of marked performance degradation, should be addressed while developing your capacity management plan. If possible, profile your applications to pinpoint bottlenecks and improve application performance. Oracle provides the following profilers:

■ Oracle JRockit Mission Control provides profiling capabilities for processes using the JRockit JVM.

http://www.oracle.com/technology/products/jrockit/missioncontrol/index.html

■ Oracle Application Diagnostics provides profiling capabilities for Java processes using the Sun JDK.

http://www.oracle.com/technology/software/products/oem/htdocs/jade.html

The objective of identifying bottlenecks is to meet your performance goals, not to eliminate all bottlenecks. Resources within a system are finite. By definition, at least one resource (CPU, memory, or I/O) can be a bottleneck in the system. Planning for anticipated peak usage, for example, may help minimize the impact of bottlenecks on your performance objectives. See Appendix A, "Related Reading and References."

There are several ways to address system bottlenecks. Some common solutions include:

■ Using Clustered Configurations
■ Using Connection Pooling
■ Setting the Max HeapSize on JVM
■ Increasing Memory or CPU
■ Segregation of Network Traffic
■ Segregation of Processes and Hardware Interrupt Handlers
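A bottleneck hunt can also start inside the JVM itself. As a minimal illustrative sketch (not one of the Oracle profilers named above), the standard java.lang.management API can report per-thread CPU time, which is often enough to identify the busiest thread in a running process:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadCpuSnapshot {
    // Returns the name of the live thread that has consumed the most CPU
    // time so far, or null if CPU time measurement is unsupported.
    public static String busiestThread() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        if (!mx.isThreadCpuTimeSupported()) {
            return null;
        }
        String busiest = null;
        long maxCpu = -1;
        for (long id : mx.getAllThreadIds()) {
            long cpu = mx.getThreadCpuTime(id); // -1 if the thread has died
            ThreadInfo info = mx.getThreadInfo(id);
            if (cpu > maxCpu && info != null) {
                maxCpu = cpu;
                busiest = info.getThreadName();
            }
        }
        return busiest;
    }

    public static void main(String[] args) {
        System.out.println("Busiest thread: " + busiestThread());
    }
}
```

Polling a snapshot like this periodically gives a crude profile; a full profiler such as JRockit Mission Control adds sampling, allocation tracking, and lock contention analysis on top of the same idea.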

27.4.1 Using Clustered Configurations

Clustered configurations distribute workloads among multiple identical cluster member instances. This effectively multiplies the amount of resources available to the distributed process and provides for seamless failover for high availability. For more information, see Chapter 28, "Using Clusters and High Availability Features."

27.4.2 Using Connection Pooling

You may be able to improve performance by reusing existing database connections rather than creating a new connection for each request. You can limit the number of connections, the timing of the sessions, and other parameters by modifying the connection strings. See Section 2.7, "Reuse Database Connections," for more information on configuring database connection pools.
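The principle behind connection pooling can be sketched in a few lines. The following is an illustrative generic pool, not the Oracle WebLogic data source implementation: a fixed set of pre-created objects (standing in for database connections) is borrowed and returned through a blocking queue, so the cost of creating a connection is paid once rather than per request:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Minimal fixed-size pool: callers borrow and return objects instead of
// creating a new one (for example, a database connection) per request.
public class SimplePool<T> {
    private final BlockingQueue<T> idle;

    public SimplePool(int size, Supplier<T> factory) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(factory.get()); // pay the creation cost up front
        }
    }

    // Blocks up to timeoutMs waiting for a free object; null on timeout.
    public T borrow(long timeoutMs) throws InterruptedException {
        return idle.poll(timeoutMs, TimeUnit.MILLISECONDS);
    }

    // Returns a borrowed object so another caller can reuse it.
    public void release(T obj) {
        idle.add(obj);
    }
}
```

Because the pool is bounded, it also acts as a throttle: when all connections are borrowed, additional callers wait instead of overwhelming the database, which is exactly the limiting behavior the connection pool parameters control.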

27.4.3 Setting the Max HeapSize on JVM

This is an application-specific tunable that enables a trade-off between garbage collection times and the number of JVMs that can be run on the same hardware. Large heaps are used more efficiently and often result in fewer garbage collections. More JVM processes offer more failover points. See Section 2.4, "Tune Java Virtual Machines (JVMs)," for more information.
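The maximum heap is normally set with the -Xmx flag (for example, java -Xmx2g ...). A small check like the following, which is simply an illustrative sketch, reports the heap ceiling the JVM is actually running with, so a tuning change can be verified at startup:

```java
// Reports the maximum heap this JVM will use, as set by -Xmx
// (or the platform default when -Xmx is not given).
public class HeapReport {
    public static long maxHeapMb() {
        return Runtime.getRuntime().maxMemory() / (1024 * 1024);
    }

    public static void main(String[] args) {
        System.out.println("Max heap: " + maxHeapMb() + " MB");
    }
}
```

Running this under different -Xmx values makes the trade-off concrete: a larger ceiling per JVM means fewer JVMs fit on the same machine.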

27.4.4 Increasing Memory or CPU

Aggregating more memory and/or CPU on a single hardware resource allows localized communication between the instances sharing that hardware. More physical memory and processing power on a single machine enables the JVMs to scale and run much larger and more powerful instances, especially 64-bit JVMs. Large JVMs tend to use memory more efficiently, and garbage collections tend to occur less frequently. In some cases, adding more CPU means that the machine can have more instruction and data cache available to the processing units, which means even higher processing efficiency. See Section 2.2, "Ensure the Hardware Resources are Sufficient," for more information.

27.4.5 Segregation of Network Traffic

Network-intensive applications can introduce significant performance issues for other applications that use the same network. Segregating the network traffic of time-critical applications from that of network-intensive applications, so that they are routed to different network interfaces, may reduce these performance impacts. It is also possible to assign different routing priorities to the traffic originating from different network interfaces.
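From the application side, routing traffic to a specific interface usually means binding the listener to that interface's address rather than to the wildcard address. The following is a minimal sketch using loopback (127.0.0.1) as a stand-in for a dedicated NIC address:

```java
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

// Binding a listener to one interface's address keeps its traffic off
// the machine's other network interfaces.
public class BoundListener {
    // Binds an ephemeral-port listener to the given local address and
    // returns the port that was assigned.
    public static int bindTo(String localAddr) throws Exception {
        ServerSocket ss = new ServerSocket();
        // Port 0 asks the OS for any free port on that interface only.
        ss.bind(new InetSocketAddress(InetAddress.getByName(localAddr), 0));
        int port = ss.getLocalPort();
        ss.close();
        return port;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("Bound on 127.0.0.1 port " + bindTo("127.0.0.1"));
    }
}
```

In a real deployment the address would be that of the interface dedicated to the time-critical application, with OS-level routing rules handling priorities between interfaces.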

27.4.6 Segregation of Processes and Hardware Interrupt Handlers

When planning for the capacity that a specific hardware resource can handle, it is important to understand that the operating system may not be able to efficiently schedule the JVM processes alongside other system processes and hardware interrupt handlers. The JVM may experience performance impacts if it shares even a few of its CPU cores with the hardware interrupt handlers. For example, disk- and network-intensive applications may induce performance impacts that are disproportionate to the load experienced by the CPU. In addition, hardware interrupts can prevent the active Java threads from reaching a GC-safe point efficiently. Separating frequent hardware interrupt handlers from the CPUs running the JVM process can reduce the wait for garbage collections to start.

It may also be beneficial to dedicate sibling CPUs on a multi-core machine to a single JVM to increase the efficiency of its CPU cache. If multiple processes have to share a CPU, the data and instruction cache can be contaminated with the data and instructions from both processes, reducing the amount of the cache that is used effectively. Assigning processes to specific CPU cores, however, can make it impossible to use other CPU cores during peak load bursts. The capacity management