User Manual: Monitoring Application Servers

Monitoring Application Servers
eG Enterprise v6.0
Restricted Rights Legend
The information contained in this document is confidential and subject to change without notice. No
part of this document may be reproduced or disclosed to others without the prior permission of eG
Innovations Inc. eG Innovations Inc. makes no warranty of any kind with regard to the software and
documentation, including, but not limited to, the implied warranties of merchantability and fitness for
a particular purpose.
Trademarks
Microsoft Windows, Windows NT, Windows 2003, and Windows 2000 are either registered trademarks
or trademarks of Microsoft Corporation in the United States and/or other countries.
The names of actual companies and products mentioned herein may be the trademarks of their
respective owners.
Copyright
©2014 eG Innovations Inc. All rights reserved.
Table of Contents
INTRODUCTION ................................................................................................................................................................................................... 1
MONITORING WEBLOGIC APPLICATION SERVERS................................................................................................................................. 2
2.1 MONITORING THE WEBLOGIC SERVER VER. 9.0 (AND ABOVE) .................................................................................................................... 3
2.1.1 The Application Processes Layer .................................................................................................................................................... 4
2.1.2 The JVM Layer ............................................................................................................................................................................... 8
2.1.3 The WebLogic Service Layer ........................................................................................................................................................ 82
2.1.4 The WebLogic Database Layer ................................................................................................................................................... 142
2.1.5 The WebLogic EJB Layer ........................................................................................................................................................... 157
2.2 MONITORING THE WEBLOGIC SERVER VER. 6/7/8 ................................................................................................................................... 180
2.2.1 The JVM Layer ........................................................................................................................................................................... 180
2.2.2 The WebLogic Service Layer ...................................................................................................................................................... 181
2.2.3 The WebLogic Database Layer ................................................................................................................................................... 182
2.2.4 The WebLogic EJB Layer ........................................................................................................................................................... 183
MONITORING WEBSPHERE APPLICATION SERVERS .......................................................................................................................... 184
3.1 MONITORING THE WEBSPHERE APPLICATION SERVER VERSION 4/5.X .................................................................................................... 184
3.1.1 The WebSphere Service Layer .................................................................................................................................................... 185
3.1.2 The WebSphere Database Layer ................................................................................................................................................. 196
3.1.3 The WebSphere EJB Layer ......................................................................................................................................................... 201
3.1.4 The WebSphere Web Layer ......................................................................................................................................................... 206
3.2 MONITORING THE WEBSPHERE APPLICATION SERVER 6.0 (AND ABOVE) ................................................................................................. 240
3.2.1 The JVM Layer ........................................................................................................................................................................... 241
3.2.2 The WAS Service Layer ............................................................................................................................................................... 242
3.2.3 The WAS Database Layer ........................................................................................................................................................... 260
3.2.4 The WAS EJB Layer .................................................................................................................................................................... 264
3.2.5 The WAS Web Layer ................................................................................................................................................................... 278
MONITORING IPLANET APPLICATION SERVERS ................................................................................................................................. 299
4.1 THE NAS SERVICE LAYER ....................................................................................................................................................................... 300
4.1.1 NAS SNMP Test .......................................................................................................................................................................... 300
MONITORING COLDFUSION APPLICATION SERVERS ......................................................................................................................... 303
5.1 THE CF SERVICE LAYER .......................................................................................................................................................................... 304
5.1.1 Coldfusion Test ........................................................................................................................................................................... 304
5.1.2 Coldfusion Log Test .................................................................................................................................................................... 306
5.2 THE CF DB ACCESS LAYER ..................................................................................................................................................................... 307
MONITORING SILVERSTREAM APPLICATION SERVERS ................................................................................................................... 309
6.1 THE SILVERSTREAM SERVICE LAYER...................................................................................................................................................... 310
6.1.1 SilverStream Test ........................................................................................................................................................................ 310
MONITORING JRUN APPLICATION SERVERS ........................................................................................................................................ 313
7.1 THE JVM LAYER ...................................................................................................................................................................................... 314
7.2 THE JRUN SERVICE LAYER ..................................................................................................................................................................... 314
7.2.1 JRun Threads Test ...................................................................................................................................................................... 315
7.2.2 JRun Service Test ........................................................................................................................................................................ 317
7.2.3 JRun Server Test ......................................................................................................................................................................... 319
MONITORING ORION SERVERS .................................................................................................................................................................. 321
8.1 THE JAVA APPLICATION SERVER LAYER .................................................................................................................................................. 321
8.1.1 Java Server Web Access Test ...................................................................................................................................................... 322
MONITORING TOMCAT SERVERS.............................................................................................................................................................. 324
9.1 THE JVM LAYER ...................................................................................................................................................................................... 326
9.1.1 JMX Connection to JVM ............................................................................................................................................................. 326
9.1.2 JVM File Descriptors Test .......................................................................................................................................................... 328
9.1.3 Java Classes Test ........................................................................................................................................................................ 329
9.1.4 JVM Garbage Collections Test ................................................................................................................................................... 332
9.1.5 JVM Threads Test ....................................................................................................................................................................... 339
9.1.6 JVM Cpu Usage Test .................................................................................................................................................................. 345
9.1.7 JVM Memory Usage Test ............................................................................................................................................................ 349
9.1.8 JVM Uptime Test ........................................................................................................................................................................ 354
9.1.9 Tests Disabled by Default for a Tomcat Server ........................................................................................................................... 358
9.2 THE WEB SERVER LAYER ........................................................................................................................................................................ 370
9.2.1 Tomcat Cache Test ...................................................................................................................................................................... 371
9.2.2 Tomcat Threads Test ................................................................................................................................................................... 373
9.3 THE JAVA APPLICATION SERVER LAYER .................................................................................................................................................. 377
9.3.1 Tomcat Applications Test ............................................................................................................................................................ 378
9.3.2 Tomcat Connectors Test ............................................................................................................................................................. 380
9.3.3 Tomcat Jsps Test ......................................................................................................................................................................... 382
9.3.4 Tomcat Servlets Test ................................................................................................................................................................... 384
MONITORING SUNONE APPLICATION SERVERS .................................................................................................................................. 387
10.1 THE JVM LAYER ...................................................................................................................................................................................... 388
10.2 THE SUNONE HTTP LAYER .................................................................................................................................................................... 389
10.2.1 SunONE Http Test ....................................................................................................................................................................... 389
10.3 THE SUNONE DB LAYER ......................................................................................................................................................................... 391
10.3.1 SunONE Jdbc Test ...................................................................................................................................................................... 391
10.4 THE SUNONE TRANSACTIONS LAYER ..................................................................................................................................................... 392
10.4.1 SunONE Transactions Test ......................................................................................................................................................... 393
10.5 THE SUNONE EJB LAYER........................................................................................................................................................................ 394
10.5.1 SunONE Ejb Cache Test ............................................................................................................................................................. 394
10.5.2 SunONE EJB Pools Test ............................................................................................................................................................. 396
MONITORING ORACLE 9I APPLICATION SERVERS.............................................................................................................................. 399
11.1 THE ORACLE JVM LAYER ........................................................................................................................................................................ 401
11.1.1 Oracle 9i Jvm Test ...................................................................................................................................................................... 401
11.1.2 Java Transactions Test ............................................................................................................................................................... 402
11.1.3 Java Classes Test ........................................................................................................................................................................ 404
11.1.4 JVM Threads Test ....................................................................................................................................................................... 407
11.1.5 JVM Cpu Usage Test .................................................................................................................................................................. 413
11.1.6 JVM Memory Usage Test ............................................................................................................................................................ 417
11.1.7 JVM Uptime Test ........................................................................................................................................................................ 420
11.1.8 JVM Garbage Collections Test ................................................................................................................................................... 424
11.1.9 JVM Memory Pool Garbage Collections Test ............................................................................................................................. 427
11.1.10 JMX Connection to JVM ............................................................................................................................................................. 431
11.1.11 JVM File Descriptors Test .......................................................................................................................................................... 433
11.2 THE ORACLE JDBC LAYER ...................................................................................................................................................................... 434
11.2.1 Oracle 9i Drivers Test ................................................................................................................................................................ 434
11.2.2 Oracle 9i Connection Cache Test ............................................................................................................................................... 435
11.2.3 Oracle 9i Transactions Test ........................................................................................................................................................ 436
11.3 THE ORACLE WEB MODULES LAYER ....................................................................................................................................................... 437
11.3.1 Oracle 9i Web Modules Test ....................................................................................................................................................... 438
11.4 THE ORACLE WEB CONTEXT LAYER ........................................................................................................................................................ 439
11.4.1 Oracle 9i Web Contexts Test ....................................................................................................................................................... 440
11.5 THE ORACLE J2EE LAYER ....................................................................................................................................................................... 441
11.5.1 Oracle 9i Jsps Test ...................................................................................................................................................................... 441
11.5.2 Oracle 9i Servlets Test ................................................................................................................................................................ 443
MONITORING ORACLE 10G APPLICATION SERVERS .......................................................................................................................... 446
12.1 THE ORACLE J2EE LAYER ....................................................................................................................................................................... 447
12.1.1 Oracle Ejbs Test ......................................................................................................................................................................... 447
12.1.2 Oracle Jms Store Test ................................................................................................................................................................. 450
MONITORING ORACLE FORMS SERVERS ............................................................................................................................................... 452
13.1 THE FORMS PROCESSES LAYER ................................................................................................................................................................ 453
13.1.1 F9i Processes Test ...................................................................................................................................................................... 454
13.2 THE FORMS SERVER LAYER ..................................................................................................................................................................... 455
13.2.1 F9i Sessions Test......................................................................................................................................................................... 455
13.2.2 F9i Response Test ....................................................................................................................................................................... 457
13.3 THE FORMS USER LAYER ......................................................................................................................................................................... 458
13.3.1 F9i Users Test ............................................................................................................................................................................. 458
MONITORING BORLAND ENTERPRISE SERVERS (BES) ...................................................................................................................... 461
14.1 THE AGENT LAYER .................................................................................................................................................................................. 462
14.1.1 Agent Statistics Test .................................................................................................................................................................... 462
14.2 THE PARTITION LAYER ............................................................................................................................................................................ 463
14.2.1 Partition Stat Test ....................................................................................................................................................................... 463
14.3 THE PARTITION SERVICES LAYER ............................................................................................................................................................ 464
14.3.1 CMP Test .................................................................................................................................................................................... 465
14.3.2 Ejb Cont Stat Test ....................................................................................................................................................................... 466
14.3.3 JDBC1 Test ................................................................................................................................................................................. 467
14.3.4 JDBC2 Test ................................................................................................................................................................................. 468
14.3.5 SFBeans Test .............................................................................................................................................................................. 469
14.3.6 Transactions Test ........................................................................................................................................................................ 471
MONITORING JBOSS APPLICATION SERVERS ....................................................................................................................................... 473
15.1 THE JVM LAYER ...................................................................................................................................................................................... 475
15.2 THE JB SERVER LAYER ............................................................................................................................................................................ 476
15.2.1 Jboss JVM Test ........................................................................................................................................................................... 476
15.2.2 Jboss Server Test ........................................................................................................................................................................ 478
15.2.3 Jboss Thread Pools Test ............................................................................................................................................................. 480
15.3 THE JB CONNECTION POOL LAYER .......................................................................................................................................................... 482
15.3.1 Jboss Connection Pools Test....................................................................................................................................................... 483
15.4 THE JB SERVLET LAYER .......................................................................................................................................................................... 485
15.4.1 Jboss Servlets Test ...................................................................................................................................................................... 486
15.5 THE JB EJB LAYER .................................................................................................................................................................................. 488
15.5.1 Jboss Ejbs Test ............................................................................................................................................................................ 488
15.6 THE JB MQ LAYER .................................................................................................................................................................................. 490
15.6.1 Jboss MQ Queues Test ................................................................................................................................................................ 491
15.6.2 Jboss MQ Topics Test ................................................................................................................................................................. 494
MONITORING DOMINO APPLICATION SERVERS .................................................................................................................................. 497
16.1 ENABLING SNMP ON A DOMINO SERVER ................................................................................................................................................ 497
16.1.1 Enabling SNMP for a Domino Server on Solaris ........................................................................................................................ 498
16.1.2 Enabling SNMP for a Domino Server on Linux .......................................................................................................................... 500
16.1.3 Enabling SNMP for a Domino Server on AIX ............................................................................................................................. 502
16.1.4 Enabling SNMP for a Domino Server on Windows..................................................................................................................... 503
16.2 THE DOMINO SERVICE LAYER .................................................................................................................................................................. 508
16.2.1 Lotus Notes Web Server Test ...................................................................................................................................................... 508
16.2.2 Lotus Notes Replication Test....................................................................................................................................................... 511
CONCLUSION .................................................................................................................................................................................................... 515
Table of Figures
Figure 2.1: Layer model of the WebLogic Application server .................................................................................................................................. 4
Figure 2.2: The tests mapped to the Application Processes layer of the WebLogic server ........................................................................................ 4
Figure 2.3: The tests associated with the JVM layer.................................................................................................................................................. 9
Figure 2.4: The layers through which a Java transaction passes .............................................................................................................................. 31
Figure 2.5: The detailed diagnosis of the Slow transactions measure ...................................................................................................................... 43
Figure 2.6: The Method Level Breakup section in the At-A-Glance tab page ......................................................................................................... 44
Figure 2.7: The Component Level Breakup section in the At-A-Glance tab page ................................................................................................... 44
Figure 2.8: Query Details in the At-A-Glance tab page ........................................................................................................................................... 45
Figure 2.9: Detailed description of the query clicked on ......................................................................................................................................... 45
Figure 2.10: The Trace tab page displaying all invocations of the method chosen from the Method Level Breakup section .................................. 46
Figure 2.11: The Trace tab page displaying all methods invoked at the Java layer/sub-component chosen from the Component Level Breakup
section ............................................................................................................................................................................................................. 47
Figure 2.12: Queries displayed in the SQL/Error tab page ...................................................................................................................................... 48
Figure 2.13: Errors displayed in the SQL/Error tab page......................................................................................................................................... 48
Figure 2.14: The detailed diagnosis of the Error transactions measure .................................................................................................................... 49
Figure 2.15: Tests mapping to the WebLogic Service layer .................................................................................................................................... 82
Figure 2.16: The detailed diagnosis of the Max execution time measure ................................................................................................................ 93
Figure 2.17: Tests mapping to the WebLogic Database layer ................................................................................................................................ 142
Figure 2.18: Tests mapping to the WebLogic EJB layer ....................................................................................................................................... 158
Figure 2.19: The detailed diagnosis of the Cache hit ratio measure ....................................................................................................................... 165
Figure 2.20: The detailed diagnosis of the Threads timeout measure .................................................................................................................... 169
Figure 2.21: Layer model of the WebLogic Application server ............................................................................................................................. 180
Figure 2.22: The tests associated with the JVM layer ............................................................................................................................................ 181
Figure 2.23: Tests mapped to the WebLogic Service layer.................................................................................................................................... 182
Figure 2.24: Tests mapping to the WebLogic Database layer ................................................................................................................................ 182
Figure 2.25: Tests mapping to the WebLogic EJB layer ....................................................................................................................................... 183
Figure 3.1: Layer model for a WebSphere application server 4/5.x .................................................................................................................... 185
Figure 3.2: Tests mapping to the WebSphere Service layer .................................................................................................................................. 185
Figure 3.3: Tests mapping to the WebSphere Database layer ................................................................................................................................ 197
Figure 3.4: Tests mapping to the WebSphere EJB layer ........................................................................................................................................ 202
Figure 3.5: Tests mapping to the WebSphere Web layer ....................................................................................................................................... 207
Figure 3.6: Layer model of the WebSphere Application server 6.0 (or above) ...................................................................................................... 241
Figure 3.7: The tests mapped to the JVM layer ..................................................................................................................................................... 242
Figure 3.8: The tests associated with the WAS Service layer ................................................................................................................................ 243
Figure 3.9: The test associated with the WAS Database layer ............................................................................................................................... 260
Figure 3.10: The tests associated with the WAS EJB layer ................................................................................................................................... 264
Figure 3.11: The tests associated with the WAS Web layer .................................................................................................................................. 278
Figure 4.1: Model of an iPlanet Application server showing the different layers monitored for the server ........................................................... 299
Figure 4.2: The NasSnmpTest that maps to the NAS Service layer of an iPlanet application server ..................................................................... 300
Figure 5.1: Layer model for a Coldfusion server ................................................................................................................................................... 303
Figure 5.2: The ColdfusionTest that maps to the CF Service layer of a Coldfusion application server ................................................................. 304
Figure 5.3: The ColdfusionTest that maps to the CF DB Access layer of a Coldfusion application server ........................................................... 308
Figure 6.1: Layer model for a SilverStream application server ............................................................................................................................. 310
Figure 6.2: Tests mapping to the Silver Stream Service layer ............................................................................................................................... 310
Figure 7.1: Layer model for a JRun application server .......................................................................................................................................... 314
Figure 7.2: The tests mapped to the JVM layer ..................................................................................................................................................... 314
Figure 7.3: Tests mapping to the JRUN Service layer ........................................................................................................................................... 315
Figure 8.1: Layer model of an Orion/Tomcat server ............................................................................................................................................. 321
Figure 8.2: Tests associated with the Java Application Server layer ..................................................................................................................... 322
Figure 9.1: The layer model of the Tomcat server ................................................................................................................................................. 324
Figure 9.2: Tests associated with JVM layer ......................................................................................................................................................... 326
Figure 9.3: The detailed diagnosis of the CPU utilization of JVM measure ......................................................................................................... 349
Figure 9.4: The detailed diagnosis of the Used memory measure ......................................................................................................... 354
Figure 9.5: Tests associated with Web Server layer .............................................................................................................................................. 371
Figure 9.6: Tests associated with Java Application Server layer ........................................................................................................................... 377
Figure 10.1: The layer model of a SunONE application server ............................................................................................................................. 388
Figure 10.2: The tests mapped to the JVM layer ................................................................................................................................................... 388
Figure 10.3: Tests associated with the SunONE HTTP layer ................................................................................................................................ 389
Figure 10.4: Tests associated with the SunONE DB layer ..................................................................................................................................... 391
Figure 10.5: Test associated with the SunONE Transactions layer........................................................................................................................ 393
Figure 10.6: Tests associated with the SunONE EJB layer.................................................................................................................................... 394
Figure 11.1: The layer model of the Oracle 9i AS ................................................................................................................................................. 400
Figure 11.2: Tests associated with the Oracle JVM layer ...................................................................................................................................... 401
Figure 11.3: The tests associated with the Oracle JDBC layer ...................................................................................................................... 434
Figure 11.4: The tests associated with the Oracle Web Modules layer .................................................................................................................. 438
Figure 11.5: The tests associated with the Oracle Web Context layer ................................................................................................................... 440
Figure 11.6: The tests associated with the Oracle J2EE layer ................................................................................................................................ 441
Figure 12.1: The layer model of the Oracle 10G application server ...................................................................................................................... 446
Figure 12.2: The tests associated with the Oracle J2EE layer ................................................................................................................................ 447
Figure 13.1: The layer model of an Oracle Forms server ...................................................................................................................................... 452
Figure 13.2: The tests associated with the Forms Processes layer ......................................................................................................................... 453
Figure 13.3: Tests associated with the Forms Server layer .................................................................................................................................... 455
Figure 13.4: Tests associated with the Forms User layer ....................................................................................................................................... 458
Figure 14.1: The layer model of a Borland Enterprise server ................................................................................................................................ 461
Figure 14.2: The tests associated with the Agent layer .......................................................................................................................................... 462
Figure 14.3: The tests associated with the Partition layer ...................................................................................................................................... 463
Figure 14.4: The tests associated with the Partition Services layer ........................................................................................................................ 465
Figure 15.1: The layer model of a JBoss application server .................................................................................................................................. 474
Figure 15.2: The tests mapped to the JVM layer ................................................................................................................................................... 475
Figure 15.3: The tests associated with the JB Server layer .................................................................................................................................... 476
Figure 15.4: The test associated with the JB Connection Pool layer...................................................................................................................... 483
Figure 15.5: The test associated with the JB Servlet layer ..................................................................................................................................... 486
Figure 15.6: The test associated with the JB EJB layer ......................................................................................................................................... 488
Figure 15.7: The tests associated with the JB MQ layer ........................................................................................................................................ 491
Figure 16.1: The Add/Remove Programs option in the Control Panel window ..................................................................................................... 503
Figure 16.2: Select the Add/Remove Windows Components option ..................................................................................................................... 504
Figure 16.3: Selecting the Management and Monitoring Tools option .................................................................................................................. 504
Figure 16.4: Selecting the SNMP option ............................................................................................................................................................... 505
Figure 16.5: Providing the path to the Windows 2000 CD .................................................................................................................................... 505
Figure 16.6: Layer model of the Domino application server ................................................................................................................................. 507
Figure 16.7: The tests associated with the Domino Service Layer......................................................................................................................... 508
Introduction
To achieve scalability and performance, most Internet application deployments have evolved into
multi-tier infrastructures where the web server tier serves as the web front-end, the business logic is
executed on middleware application servers, and the backend storage and access is provided via
database servers. While multi-tier infrastructures offer a variety of scalability and extensibility
benefits, they are also more difficult to operate and manage. When a problem occurs (e.g., a
slowdown), an administrator often has difficulty in figuring out which application(s) in the multi-tier
infrastructure could be the cause of the problem - i.e., is it the network? Or the database? Or the
WebLogic server? Or the middleware? Or the web server? Comprehensive, routine monitoring of every
infrastructure application and network device is essential to be able to troubleshoot effectively when
problems occur.
The application server middleware that hosts and supports the business logic components is often the
most complex of the multi-tier infrastructure. To offer peak performance, an application server
provides a host of complex functions and features including database connection pooling, thread
pooling, database result caching, session management, bean caching and management etc. To ensure
that the application server is functioning effectively at all times, all of these functions have to be
monitored and tracked proactively and constantly.
eG Enterprise offers specialized monitoring models for each of the most popular application servers
such as WebLogic, WebSphere, ColdFusion, Oracle 9i/10G, etc. A plethora of metrics relating to the
health of the application servers can be monitored in real-time and alerts can be generated based on
user-defined thresholds or auto-computed baselines. These metrics enable administrators to quickly
and accurately determine server availability and responsiveness, resource usage at the host-level and
at the application server level, how well the application server processes requests, how quickly the
server completes transactions, overall server security, etc.
This document discusses in detail how eG Enterprise monitors each of the popular web application servers in the market.
Monitoring WebLogic
Application Servers
BEA WebLogic Server is a fully-featured, standards-based application server providing the foundation
on which an enterprise can build reliable, scalable, and manageable applications. With its
comprehensive set of features, compliance with open standards, multi-tiered architecture, and support
for component-based development, WebLogic Server provides the underlying core functionality
necessary for the development and deployment of business-driven applications. Any issue with the
functioning of the WebLogic server, if not resolved in time, can strike at the very core of these
business-critical applications, causing infrastructure downtime and significant revenue losses. This
underscores the need for continuously monitoring the external availability and internal operations of
the WebLogic server.
eG Enterprise provides two distinct models for monitoring WebLogic servers - the WebLogic model and
the WebLogic (6/7/8) model. As the names suggest, the WebLogic 6/7/8 model can be used to
monitor the WebLogic server version 6, 7, and 8, and the WebLogic model can be used for monitoring
WebLogic 9.0 (and above).
Regardless of the model used, the metrics obtained enable administrators to find answers to the
following persistent performance questions:
Server monitoring
Is the WebLogic process running?
Is the memory usage of the server increasing over time?
Is the server's request processing rate unusually high?
JVM monitoring
Is the JVM heap size adequate?
Is the garbage collection tuned well or is the JVM spending too much
time in garbage collection?
Thread monitoring
Are the WebLogic server's execute queues adequately sized?
Are there too many threads waiting to be serviced, thereby causing
slow response time?
Security monitoring
How many invalid login attempts have been made to the WebLogic
server?
Are these attempts recurring?
JMS monitoring
Are there many pending messages in the messaging server?
Is the message traffic unusually high?
Connector
monitoring
What is the usage pattern of connections in a connector pool?
Cluster monitoring
Are all the WebLogic servers in the cluster currently available?
Is the load being balanced across the cluster?
Transaction
monitoring
How many user transactions are happening?
Are there too many rollbacks occurring?
Servlet monitoring
Which servlet(s) are being extensively accessed?
What is the average invocation time for each servlet?
EJB Pool monitoring
Are there adequate numbers of beans in a bean pool?
How many beans are in use? Are there any clients waiting for a bean?
EJB Cache
monitoring
Is the cache adequately sized or are there too many cache misses?
What is the rate of EJB activations and passivations?
EJB Lock monitoring
Is there contention for locks?
How many beans are locked?
How many attempts have been made to acquire a lock for each bean?
JDBC Connection
monitoring
Are all the JDBC pools available?
Is each pool adequately sized?
What are the peak usage times and values?
How many connection leaks have occurred?
JDBC call monitoring
How many JDBC calls have been made?
What was the average response time of those calls?
What are the queries that take a long time to execute?
The sections that will follow discuss each of these models in great detail.
2.1 Monitoring the WebLogic Server Ver. 9.0 (and above)
The special WebLogic monitoring model (see Figure 2.1) that eG Enterprise offers uses JMX
(Java Management Extensions), the standard for managing Java components, to monitor the
WebLogic server 9.0 (and above). JMX allows users to instrument their applications and control or
monitor them using a management console. Using this mechanism, over a hundred critical metrics
relating to a WebLogic server instance can be monitored in real-time, and alerts can be generated
based on user-defined thresholds or auto-computed baselines.
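The JMX attribute-read mechanism described above can be illustrated with a minimal sketch. The class and method names below are our own, and the sketch reads a standard JVM MBean from the local platform MBean server rather than a remote WebLogic instance (which would be reached through a JMXConnector and would expose WebLogic-specific MBean names); it only demonstrates how a metric such as heap usage is fetched as an MBean attribute.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;

public class JmxHeapProbe {
    // Read the heap-usage attribute exposed by the standard
    // java.lang:type=Memory MBean. A remote server would be reached via a
    // JMXConnector instead; this local probe illustrates the attribute read.
    static long usedHeapBytes() throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        ObjectName memory = new ObjectName("java.lang:type=Memory");
        CompositeData heap =
            (CompositeData) mbs.getAttribute(memory, "HeapMemoryUsage");
        return (Long) heap.get("used");
    }

    public static void main(String[] args) throws Exception {
        System.out.println("Heap used (bytes): " + usedHeapBytes());
    }
}
```

A monitoring agent would poll such attributes at each test period and compare the values against thresholds or baselines.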
Figure 2.1: Layer model of the WebLogic Application server
The sections that will follow discuss the top 4 layers of Figure 2.1, and the metrics they report. In
addition, the Application Processes layer will also be touched upon, as it includes an additional test for
WebLogic servers called the Windows Service Resources test.
The remaining layers have been extensively dealt with in the Monitoring Unix and Windows Servers
document.
2.1.1 The Application Processes Layer
The default Processes test mapped to this layer reports the availability and resource usage of the
processes that are critical to the functioning of the WebLogic server. For more details about the
Processes test, refer to the Monitoring Unix and Windows Servers document.
If the WebLogic server is operating on a Windows host, then, optionally, you can configure a Windows
Service Resources test for the server. This test reports the availability and resource usage of a
configured service.
Figure 2.2: The tests mapped to the Application Processes layer of the WebLogic server
2.1.1.1 Windows Service Resources Test
For a configured service, this test reports whether that service is up and running or not. In addition,
the test automatically determines the ID and name of the process that corresponds to the configured
service, and measures the CPU and memory usage of that process and the I/O load imposed by the
process. This test executes only on Windows hosts.
This test is disabled by default. Enable this test only if the WebLogic server is operating on a Windows host. To
enable the test, go to the ENABLE / DISABLE TESTS page using the menu sequence: Agents -> Tests ->
Enable/Disable, pick WebLogic as the Component type, Performance as the Test type, choose this test
from the DISABLED TESTS list, and click on the << button to move the test to the ENABLED TESTS list.
Finally, click the Update button.
Purpose
Reports whether the configured service is available or not, automatically determines
the ID and name of the process that corresponds to the configured service, and
measures the CPU and memory usage of that process and the I/O load imposed by
the process
Target of the
test
A WebLogic application server
Agent
deploying the
test
An internal agent
Configurable
parameters for
the test
1. TEST PERIOD - How often should the test be executed
2. HOST - The IP address of the WebLogic server
3. PORT - The port number of the WebLogic server
4. SERVICENAME - Specify the exact name of the service to be monitored. For example,
to monitor the World Wide Web Publishing service, the SERVICENAME should
be: W3SVC. If your service name embeds white spaces, then specify the service
name within "double-quotes".
Outputs of the
test
One set of results for the SERVICENAME configured
Measurements
made by the
test
Measurement
Measurement
Unit
Interpretation
Service availability:
Indicates whether the
configured service is
available or not.
Percent
If the service exists on the target host
and is currently running, then this
measure will report the value 100. On
the other hand, if the service exists but
is not running, then this measure will
report the value 0. If the service does
not exist, then the test will report the
value Unknown.
CPU utilization:
Indicates the
percentage of CPU
utilized by the process
that corresponds to the
configured
SERVICENAME
Percent
A very high value could indicate that the
service is consuming excessive CPU
resources.
Memory utilization:
For the process
corresponding to the
specified
SERVICENAME, this
value represents the
ratio of the resident set
size of the process to
the physical memory of
the host system,
expressed as a
percentage.
Percent
A sudden increase in memory utilization
for a process may be indicative of
memory leaks in the application.
Handle count:
Indicates the number of
handles opened by the
process mapped to the
configured
SERVICENAME.
Number
An increasing trend in this measure is
indicative of a memory leak in the
service.
Number of threads:
Indicates the number of
threads that are used
by the process that
corresponds to the
configured
SERVICENAME.
Number
Virtual memory used:
Indicates the amount of
virtual memory that is
being used by the
process that
corresponds to the
configured
SERVICENAME.
MB
I/O data rate:
Indicates the rate at
which the process
mapped to the
configured
SERVICENAME is
reading and writing
bytes in I/O operations.
Kbytes/Sec
This value counts all I/O activity
generated by a process and includes file,
network and device I/Os.
I/O data operations:
Indicates the rate at
which the process
corresponding to the
specified SERVICENAME
is issuing read and write
operations to file,
network and device I/O.
Operations/Sec
I/O read data rate:
Indicates the rate at
which the process that
corresponds to the
configured SERVICENAME
is reading data
from file, network and
device I/O operations.
Kbytes/Sec
I/O write data rate:
Indicates the rate at
which the process (that
corresponds to the
configured
SERVICENAME) is
writing data to file,
network and device I/O
operations.
Kbytes/Sec
Page fault rate:
Indicates the total rate
at which page faults are
occurring for the
threads of the process
that maps to the
configured
SERVICENAME.
Faults/Sec
A page fault occurs when a thread refers
to a virtual memory page that is not in
its working set in main memory. This
may not cause the page to be fetched
from disk if it is on the standby list and
hence already in main memory, or if it is
in use by another process with which the page is shared.
Memory working set:
Indicates the current
size of the working set
of the process that
maps to the configured
SERVICENAME.
MB
The Working Set is the set of memory
pages touched recently by the threads in
the process. If free memory in the
computer is above a threshold, pages
are left in the Working Set of a process
even if they are not in use.
When free memory falls below a
threshold, pages are trimmed from
Working Sets. If they are needed they
will then be soft-faulted back into the
Working Set before leaving main
memory.
By tracking the working set of a process
over time, you can determine if the
application has a memory
leak or not.
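The trend analysis described above can be sketched with a deliberately simplistic heuristic of our own devising (not eG Enterprise's actual baselining logic): flag a possible leak when a window of memory samples grows monotonically.

```java
import java.util.List;

public class LeakHeuristic {
    // Crude illustration: flag a possible leak when every sample in the
    // window is at least as large as the one before it (monotonic growth).
    // Real monitoring would use baselines, not a simple monotonicity check.
    static boolean steadilyIncreasing(List<Long> samples) {
        for (int i = 1; i < samples.size(); i++) {
            if (samples.get(i) < samples.get(i - 1)) return false;
        }
        return samples.size() >= 2;
    }

    public static void main(String[] args) {
        System.out.println(steadilyIncreasing(List.of(100L, 120L, 150L, 180L))); // true
        System.out.println(steadilyIncreasing(List.of(100L, 90L, 150L)));        // false
    }
}
```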
2.1.2 The JVM Layer
A Java virtual machine (JVM), an implementation of the Java Virtual Machine Specification, interprets
compiled Java bytecode for a computer's processor (or "hardware platform") so that it can perform
a Java program's instructions. The Java Virtual Machine Specification defines an abstract -- rather
than a real -- machine or processor, specifying an instruction set, a set of registers, a
stack, a garbage-collected heap, and a method area.
The tests associated with the JVM layer of WebLogic enable administrators to perform the following
functions:
Assess the effectiveness of the garbage collection activity performed on the JVM heap
Monitor WebLogic thread usage
Evaluate the performance of the BEA JRockit JVM
Figure 2.3: The tests associated with the JVM layer
2.1.2.1 WebLogic Test
This test monitors the performance of a WebLogic server by tracking the rate of requests processed by
the server, the number of requests waiting for processing, and the percentage of heap usage by the
server. While the rate of requests processed and the number of queued requests can be indicative of
performance problems with the WebLogic server, the percentage heap usage can be indicative of the
reason for the problem.
The heap size determines how often, and for how long garbage collection is performed by the Java
Virtual Machine (JVM) that hosts the WebLogic server. The Java heap is a repository for live objects,
dead objects, and free memory. When the JVM runs out of memory in the heap, all execution in the
JVM stops while a Garbage Collection (GC) algorithm goes through memory and frees space that is no
longer required by an application. This is an obvious performance hit because users accessing a
WebLogic server must wait while GC happens. No server-side work can be done during GC.
Consequently, the heap size must be tuned to minimize the amount of time that the JVM spends in
garbage collection, while at the same time maximizing the number of clients that the server can
handle at a given time. For Java 2 environments, it is recommended that the heap size be set as large
as possible without causing the host system to "swap" pages to disk (use the output of eG Enterprise's
SystemTest to gauge the amount of swapping being performed by the operating system).
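The relationship between heap sizing and garbage-collection overhead can be observed through the JDK's standard GarbageCollectorMXBean interface. The sketch below is our own illustrative code (not part of eG Enterprise); it sums collection counts and times across all collectors, the raw figures from which a time-in-GC percentage per measurement interval could be derived.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {
    // Sum collection counts across all collectors. A sustained rise in
    // collections (and time spent) per interval suggests the heap is
    // undersized for the workload.
    static long totalGcCount() {
        long count = 0;
        for (GarbageCollectorMXBean gc :
                ManagementFactory.getGarbageCollectorMXBeans()) {
            count += Math.max(0, gc.getCollectionCount()); // -1 = unavailable
        }
        return count;
    }

    static long totalGcTimeMillis() {
        long millis = 0;
        for (GarbageCollectorMXBean gc :
                ManagementFactory.getGarbageCollectorMXBeans()) {
            millis += Math.max(0, gc.getCollectionTime());
        }
        return millis;
    }

    public static void main(String[] args) {
        System.out.println("GC count: " + totalGcCount()
                + ", GC time (ms): " + totalGcTimeMillis());
    }
}
```

Sampling these counters at two points in time and dividing the GC-time delta by the wall-clock delta gives the fraction of time the JVM spent in garbage collection over that interval.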
Purpose
To measure statistics pertaining to a WebLogic application server
Target of the
test
A WebLogic application server
Agent
deploying the
test
An internal agent
Configurable
parameters for
the test
1. TEST PERIOD - How often should the test be executed
2. HOST - The IP address of the WebLogic server
3. PORT - The port number of the WebLogic server
4. SNMPPORT - The port number on which the WebLogic server is exposing its
SNMP MIB (relevant to WebLogic server 5.1 only). For version 6.0 and above,
enter “none” in this text box.
5. SNMPVERSION - By default, the eG agent supports SNMP version 1. Accordingly,
the default selection in the SNMPVERSION list is v1. However, if a different SNMP
framework is in use in your environment, say SNMP v2 or v3, then select the
corresponding option from this list.
6. SNMPCOMMUNITY - The SNMP community string to be used with the SNMP
query to access the WebLogic server’s MIB (relevant to WebLogic server 5.1
only). For version 6.0 and above, enter “none” in this text box.
7. USERNAME - This parameter appears only when v3 is selected as the
SNMPVERSION. SNMP version 3 (SNMPv3) is an extensible SNMP Framework
which supplements the SNMPv2 Framework by additionally supporting message
security, access control, and remote SNMP configuration capabilities. To extract
performance statistics from the MIB using the highly secure SNMP v3 protocol,
the eG agent has to be configured with the required access privileges; in other
words, the eG agent should connect to the MIB using the credentials of a user
with access permissions to the MIB. Therefore, specify the name of such a user
against the USERNAME parameter.
8. AUTHPASS - Specify the password that corresponds to the above-mentioned
USERNAME. This parameter once again appears only if the SNMPVERSION
selected is v3.
9. CONFIRM PASSWORD - Confirm the AUTHPASS by retyping it here.
10. AUTHTYPE - This parameter too appears only if v3 is selected as the
SNMPVERSION. From the AUTHTYPE list box, choose the authentication
algorithm using which SNMP v3 converts the specified USERNAME and
PASSWORD into a 32-bit format to ensure security of SNMP transactions. You
can choose between the following options:
MD5 - Message Digest Algorithm
SHA - Secure Hash Algorithm
11. ENCRYPTFLAG - This flag appears only when v3 is selected as the
SNMPVERSION. By default, the eG agent does not encrypt SNMP requests.
Accordingly, the ENCRYPTFLAG is set to NO by default. To ensure that SNMP
requests sent by the eG agent are encrypted, select the YES option.
12. ENCRYPTTYPE - If the ENCRYPTFLAG is set to YES, then you will have to mention the encryption type by selecting an option from the ENCRYPTTYPE list. SNMP v3 supports the following encryption types:
DES - Data Encryption Standard
AES - Advanced Encryption Standard
13. ENCRYPTPASSWORD - Specify the encryption password here.
14. CONFIRM PASSWORD - Confirm the encryption password by retyping it here.
15. USER - The admin user name of the WebLogic server being monitored.
16. PASSWORD - The password of the specified admin user.
17. CONFIRM PASSWORD - Confirm the password by retyping it here.
18. ENCRYPTPASS - If the specified password needs to be encrypted, set the ENCRYPTPASS flag to YES. Otherwise, set it to NO. By default, the YES option will be selected.
Note:
If the USEWARFILE flag is set to No, then make sure that the ENCRYPTPASS flag is also set to No.
19. SSL - Indicate whether SSL (Secure Sockets Layer) is to be used to connect to the WebLogic server.
20. SERVER - The name of the specific server instance to be monitored for a WebLogic server (the default value is "localhome").
21. URL - The URL to be accessed to collect metrics pertaining to the WebLogic server. By default, this test connects to a managed WebLogic server and attempts to obtain the metrics of interest by accessing the local MBeans of the server. This parameter can be changed to a value of http://<adminserverIP>:<adminserverPort>. In this case, the test connects to the WebLogic admin server to collect metrics pertaining to the managed server (specified by the HOST and PORT). The URL setting provides the administrator with the flexibility of determining the WebLogic monitoring configuration to use.
Note:
If the admin server is to be used for collecting measures for all the managed WebLogic servers, then it is mandatory that the egurkha war file is deployed to the admin server, and that the admin server is up and running.
22. VERSION - The VERSION text box indicates the version of the WebLogic server to be managed. The default value is "none", in which case the test auto-discovers the WebLogic version. If the value of this parameter is not "none", the test uses the value provided (e.g., 7.0) as the WebLogic version (i.e., it does not auto-discover the WebLogic server version). This parameter has been added to address cases in which the eG agent is not able to discover the WebLogic server version.
23. USEWARFILE - This flag indicates whether or not monitoring is to be done using a Web archive file deployed on the WebLogic server (in which case, the agent uses HTTP/HTTPS to connect to the server). If this flag is set to No, the agent directly connects to the WebLogic server using the T3 protocol (no other file needs to be deployed on the WebLogic server for this to work). Note that T3 protocol-based support is available for WebLogic server versions 9 and 10 only. Also, if the USEWARFILE parameter is set to No, make sure that the ENCRYPTPASS parameter is set to No as well.
When monitoring a WebLogic server deployed on a Unix platform in particular, if the USEWARFILE parameter is set to No, you have to make sure that the eG agent install user is added to the WebLogic users group.
24. WEBLOGICJARLOCATION - Specify the location of the WebLogic server's Java archive (JAR) file. If the USEWARFILE flag is set to No, then the weblogic.jar file specified here is used to connect to the corresponding WebLogic server using the T3 protocol. Note that T3 protocol-based support is available for WebLogic server versions 9 and 10 only.
25. TIMEOUT - Specify the duration (in seconds) within which the SNMP query executed by this test should time out in the TIMEOUT text box. The default is 10 seconds.
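The dependencies between the parameters above (SNMP settings that apply only to certain WebLogic or SNMP versions, and the note that USEWARFILE=No requires ENCRYPTPASS=No) can be summarized in code. The following sketch is illustrative only; the helper function is hypothetical and is not part of eG Enterprise:

```python
# Hypothetical validation helper mirroring the parameter rules documented above.

def validate_weblogic_test_config(cfg):
    """Return a list of configuration errors for a WebLogic test parameter dict."""
    errors = []
    # SNMPPORT / SNMPCOMMUNITY are relevant to WebLogic 5.1 only;
    # for version 6.0 and above, the manual asks for the literal value "none".
    version = cfg.get("VERSION", "none")
    if version != "none" and float(version) >= 6.0:
        for key in ("SNMPPORT", "SNMPCOMMUNITY"):
            if cfg.get(key, "none") != "none":
                errors.append(f"{key} must be 'none' for WebLogic {version}")
    # USERNAME, AUTHPASS, and AUTHTYPE apply only when SNMPVERSION is v3.
    if cfg.get("SNMPVERSION", "v1") != "v3":
        for key in ("USERNAME", "AUTHPASS", "AUTHTYPE"):
            if key in cfg:
                errors.append(f"{key} is only meaningful when SNMPVERSION is v3")
    # Per the note above: USEWARFILE=No requires ENCRYPTPASS=No.
    if cfg.get("USEWARFILE", "Yes") == "No" and cfg.get("ENCRYPTPASS", "Yes") != "No":
        errors.append("ENCRYPTPASS must be set to No when USEWARFILE is No")
    return errors
```

A configuration that violates one of these rules would simply be reported rather than silently accepted; the actual eG agent behavior may differ.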
Outputs of the
test
One set of results for each WebLogic application server.
Measurements
made by the
test
Measurement
Measurement Unit
Interpretation
Throughput:
Rate of requests
processed by the
WebLogic server.
Reqs/Sec
A high request rate is an indicator of
server overload. By comparing the
request rates across application servers,
an operator can gauge the effectiveness
of load balancers (if any) that are in use.
Heap usage percent:
Percentage of heap
space currently in use
by the WebLogic server.
Percent
When the heap used percent reaches
100%, the server will start garbage
collection rather than processing
requests. Hence, a very high percentage
of heap usage (close to 100%) will
dramatically lower performance. In such
a case, consider increasing the heap size
to be used.
Requests queued:
Number of requests
currently waiting to be
processed by the
server.
Number
An increase in the number of queued requests can indicate a bottleneck on the WebLogic server. The cause could be at the server itself, or in one or more of the applications hosted on the server.
Total heap size:
Current heap size of the
WebLogic server's Java
Virtual Machine
MB
Free heap size:
The currently unused
portion of the WebLogic
server's Java Virtual
Machine
MB
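As a rough sketch of how the heap measures above relate to one another, the heap usage percentage can be derived from the total and free heap sizes. The 90% alert threshold below is an illustrative choice, not an eG default:

```python
# Illustrative derivation of the "Heap usage percent" measure from the
# "Total heap size" and "Free heap size" measures (all values in MB).

def heap_usage_percent(total_heap_mb, free_heap_mb):
    used_mb = total_heap_mb - free_heap_mb
    return 100.0 * used_mb / total_heap_mb

def heap_alert(total_heap_mb, free_heap_mb, threshold=90.0):
    # Usage close to 100% means the server will spend its time on garbage
    # collection rather than on request processing, so flag early.
    return heap_usage_percent(total_heap_mb, free_heap_mb) >= threshold
```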
2.1.2.2 WebLogic Threads Test
A WebLogic server (prior to version 9.x) may be configured with different execute queues. By default, a WebLogic server is configured with one thread queue that is used for execution by all applications running on a server instance. A common way of improving a WebLogic server's performance is by configuring multiple thread execute queues. For example, a mission-critical application can be assigned to a specific thread execute queue, thereby guaranteeing it a fixed number of execute threads. Other, less critical applications may compete for threads in the default execute queue. While using different thread execute queues can significantly improve performance, improperly configured or maintained execute queues can result in less than optimal performance. For example, you may find that while one thread queue has a number of idle threads, applications in another thread execute queue could be waiting for execute threads to become available. For WebLogic servers prior to version 9.x, the WebLogic Threads test monitors the different thread execute queues configured for the server.
From WebLogic server 9.x onwards, however, execute queues are replaced by 'work managers'. Therefore, while monitoring WebLogic server 9.x or above, the WebLogic Threads test will report one set of metrics for every 'work manager' configured for the server. The test also reports an additional ThreadPool descriptor, which indicates the extent of usage of the thread pool.
Purpose
To report performance statistics pertaining to the thread execute queues of a
WebLogic server instance
Target of the
test
A WebLogic application server
Agent
deploying the
test
An internal agent
Configurable
parameters for
the test
1. TEST PERIOD - How often should the test be executed
2. HOST - The IP address of the WebLogic server
3. PORT - The port number of the WebLogic server
4. USER - The admin user name of the WebLogic server being monitored.
5. PASSWORD - The password of the specified admin user.
6. CONFIRM PASSWORD - Confirm the password by retyping it here.
7. ENCRYPTPASS - If the specified password needs to be encrypted, set the ENCRYPTPASS flag to YES. Otherwise, set it to NO. By default, the YES option will be selected.
Note:
If the USEWARFILE flag is set to No, then make sure that the ENCRYPTPASS flag is also set to No.
8. SSL - Indicate whether SSL (Secure Sockets Layer) is to be used to connect to the WebLogic server.
9. SERVER - The name of the specific server instance to be monitored for a WebLogic server (the default value is "localhome").
10. URL - The URL to be accessed to collect metrics pertaining to the WebLogic server. By default, this test connects to a managed WebLogic server and attempts to obtain the metrics of interest by accessing the local MBeans of the server. This parameter can be changed to a value of http://<adminserverIP>:<adminserverPort>. In this case, the test connects to the WebLogic admin server to collect metrics pertaining to the managed server (specified by the HOST and PORT). The URL setting provides the administrator with the flexibility of determining the WebLogic monitoring configuration to use.
Note:
If the admin server is to be used for collecting measures for all the managed WebLogic servers, then it is mandatory that the egurkha war file is deployed to the admin server, and that the admin server is up and running.
11. VERSION - The VERSION text box indicates the version of the WebLogic server to be managed. The default value is "none", in which case the test auto-discovers the WebLogic version. If the value of this parameter is not "none", the test uses the value provided (e.g., 7.0) as the WebLogic version (i.e., it does not auto-discover the WebLogic server version). This parameter has been added to address cases in which the eG agent is not able to discover the WebLogic server version.
12. USEWARFILE - This flag indicates whether or not monitoring is to be done using a Web archive file deployed on the WebLogic server (in which case, the agent uses HTTP/HTTPS to connect to the server). If this flag is set to No, the agent directly connects to the WebLogic server using the T3 protocol (no other file needs to be deployed on the WebLogic server for this to work). Note that T3 protocol-based support is available for WebLogic server versions 9 and 10 only. Also, if the USEWARFILE parameter is set to No, make sure that the ENCRYPTPASS parameter is set to No as well.
When monitoring a WebLogic server deployed on a Unix platform in particular, if the USEWARFILE parameter is set to No, you have to make sure that the eG agent install user is added to the WebLogic users group.
13. WEBLOGICJARLOCATION - Specify the location of the WebLogic server's Java archive (JAR) file. If the USEWARFILE flag is set to No, then the weblogic.jar file specified here is used to connect to the corresponding WebLogic server using the T3 protocol. Note that T3 protocol-based support is available for WebLogic server versions 9 and 10 only.
Outputs of the
test
One set of results for each thread execute queue of a WebLogic application server.
Measurements
made by the
test
Measurement
Measurement Unit
Interpretation
Idle threads:
Indicates the number of
idle threads assigned to
a queue.
Number
If the value of this measure is close to 0,
it indicates a probable delay in the
processing of subsequent requests.
In case of WebLogic 9.x or higher, this measure
will be available for the ThreadPool descriptor
only, and not the individual work managers.
Thread utilization:
Indicates the
percentage of threads
utilized in a queue
Percent
When this value reaches 100%, it
indicates a heavy load on the server and
that it cannot process further requests
until a few threads become idle.
Typically, this value should be less than
90%.
In case of WebLogic 9.x or higher, this measure
will be available for the ThreadPool descriptor
only, and not the individual work managers.
Pending requests:
Indicates the number of
requests waiting in the
queue
Number
A high value of this measure can result
in significant request processing delays.
Requests:
Indicates the number of
requests that are
processed by the server
per second
Reqs/sec
While a high value of this measure is
indicative of the good health of the
server, a low value indicates a
processing bottleneck.
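The relationship between the idle-thread and utilization measures above can be sketched as follows. This is an illustrative computation with hypothetical field names and thresholds, not eG's internal logic:

```python
# Illustrative computation of "Thread utilization" for one execute queue
# (pre-9.x) or for the ThreadPool descriptor (9.x and above).

def thread_utilization(total_threads, idle_threads):
    busy = total_threads - idle_threads
    return 100.0 * busy / total_threads

def queue_health(total_threads, idle_threads, pending_requests):
    """Classify a queue using the 90%/100% thresholds discussed above."""
    util = thread_utilization(total_threads, idle_threads)
    if util >= 100.0 and pending_requests > 0:
        return "saturated"   # no further requests until threads become idle
    if util > 90.0:
        return "warning"     # typically utilization should stay below 90%
    return "ok"
```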
2.1.2.3 WebLogic Rockit JVM Test
This test exposes runtime data about the JRockit Virtual Machine (VM) that is running the current
WebLogic Server instance.
The WebLogic Rockit JVM test will work only if the following conditions are fulfilled:
The WebLogic server must be launched on the JRockit JVM
The managementapi.jar should be on the WebLogic server's startup classpath
Purpose
To report runtime data about the JRockit Virtual Machine (VM) that is running the
current WebLogic Server instance
Target of the
test
A WebLogic application server
Agent
deploying the
test
An internal agent
Configurable
parameters for
the test
1. TEST PERIOD - How often should the test be executed
2. HOST - The IP address of the WebLogic server
3. PORT - The port number of the WebLogic server
4. USER - The admin user name of the WebLogic server being monitored.
5. PASSWORD - The password of the specified admin user.
6. CONFIRM PASSWORD - Confirm the password by retyping it here.
7. ENCRYPTPASS - If the specified password needs to be encrypted, set the ENCRYPTPASS flag to YES. Otherwise, set it to NO. By default, the YES option will be selected.
Note:
If the USEWARFILE flag is set to No, then make sure that the ENCRYPTPASS flag is also set to No.
8. SSL - Indicate whether SSL (Secure Sockets Layer) is to be used to connect to the WebLogic server.
9. SERVER - The name of the specific server instance to be monitored for a WebLogic server (the default value is "localhome").
10. URL - The URL to be accessed to collect metrics pertaining to the WebLogic server. By default, this test connects to a managed WebLogic server and attempts to obtain the metrics of interest by accessing the local MBeans of the server. This parameter can be changed to a value of http://<adminserverIP>:<adminserverPort>. In this case, the test connects to the WebLogic admin server to collect metrics pertaining to the managed server (specified by the HOST and PORT). The URL setting provides the administrator with the flexibility of determining the WebLogic monitoring configuration to use.
Note:
If the admin server is to be used for collecting measures for all the managed WebLogic servers, then it is mandatory that the egurkha war file is deployed to the admin server, and that the admin server is up and running.
11. VERSION - The VERSION text box indicates the version of the WebLogic server to be managed. The default value is "none", in which case the test auto-discovers the WebLogic version. If the value of this parameter is not "none", the test uses the value provided (e.g., 7.0) as the WebLogic version (i.e., it does not auto-discover the WebLogic server version). This parameter has been added to address cases in which the eG agent is not able to discover the WebLogic server version.
12. USEWARFILE - This flag indicates whether or not monitoring is to be done using a Web archive file deployed on the WebLogic server (in which case, the agent uses HTTP/HTTPS to connect to the server). If this flag is set to No, the agent directly connects to the WebLogic server using the T3 protocol (no other file needs to be deployed on the WebLogic server for this to work). Note that T3 protocol-based support is available for WebLogic server versions 9 and 10 only. Also, if the USEWARFILE parameter is set to No, make sure that the ENCRYPTPASS parameter is set to No as well.
When monitoring a WebLogic server deployed on a Unix platform in particular, if the USEWARFILE parameter is set to No, you have to make sure that the eG agent install user is added to the WebLogic users group.
13. WEBLOGICJARLOCATION - Specify the location of the WebLogic server's Java archive (JAR) file. If the USEWARFILE flag is set to No, then the weblogic.jar file specified here is used to connect to the corresponding WebLogic server using the T3 protocol. Note that T3 protocol-based support is available for WebLogic server versions 9 and 10 only.
Outputs of the
test
One set of results for every WebLogic server being monitored.
Measurements
made by the
test
Measurement
Measurement Unit
Interpretation
Total heap:
Indicates the amount of
memory currently
allocated to the Virtual
Machine's Java heap.
MB
Used heap:
Indicates the amount of
Java heap memory that
is currently being used
by the Virtual Machine.
MB
If the value of this measure increases
consistently, it is indicative of heavy load
on the Virtual Machine.
Free heap:
Indicates the amount
of Java heap memory
that is currently free in
the Virtual Machine.
MB
A very low value of this measure is a
cause of concern, as it indicates a heavy
utilization of the JVM heap. Consider
increasing the JVM heap size under such
circumstances.
Total nursery:
Indicates the amount of
memory that is
currently allocated to
the nursery.
MB
GC count:
Indicates the number of
garbage collection runs
that have occurred
since the Virtual
Machine was started.
Number
If GC has run too many times during a
short interval, it indicates that the JVM is
in dire need of free heap for normal
functioning. Moreover, frequent GC
executions could cause application
performance to deteriorate. In order to
avoid this, it is recommended that you
increase the heap size or alter the GC
frequency.
GC time:
Indicates the time that
the Virtual Machine has
spent on all garbage
collection runs since the
VM was started.
Secs
Total load:
Indicates the
percentage of load that
the Virtual Machine is
placing on all
processors in the host
computer.
Percent
Percent heap used:
Indicates the
percentage of the total
Java heap memory that
is currently being used
by the Virtual Machine.
Percent
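Since "GC count" and "GC time" above are cumulative since VM start, spotting the frequent-GC condition described in the interpretation requires differencing two samples. The sketch below is illustrative; sample shapes and thresholds are assumptions, not eG's implementation:

```python
# Turn cumulative JRockit GC counters into per-interval rates.

def gc_rates(prev, curr, interval_secs):
    """prev and curr are (gc_count, gc_time_secs) samples taken interval_secs apart."""
    runs = curr[0] - prev[0]
    time_spent = curr[1] - prev[1]
    return {
        # How often GC is running: many runs per minute suggest the JVM
        # urgently needs free heap.
        "runs_per_min": 60.0 * runs / interval_secs,
        # Fraction of wall-clock time spent in GC instead of application work.
        "gc_overhead_percent": 100.0 * time_spent / interval_secs,
    }
```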
2.1.2.4 WebLogic Work Managers Test
The WebLogic Server allows you to configure how your application prioritizes the execution of its work, based on rules you define and by monitoring actual runtime performance. You define the rules and constraints for your application by defining a Work Manager and applying it either globally to a WebLogic Server domain or to a specific application component.
This test monitors the requests to applications, and helps analyze how the work manager mapped to
each application is managing the requests. By closely observing the variations to the measures
reported by this test, you can quickly identify current/potential application slowdowns, and figure out
whether changes in the corresponding work manager specification can improve application
performance.
Purpose
Monitors the requests to applications, and helps analyze how the work manager
mapped to each application is managing the requests
Target of the
test
A WebLogic application server
Agent
deploying the
test
An internal agent
Configurable
parameters for
the test
1. TEST PERIOD - How often should the test be executed
2. HOST - The IP address of the WebLogic server
3. PORT - The port number of the WebLogic server
4. USER - The admin user name of the WebLogic server being monitored.
5. PASSWORD - The password of the specified admin user.
6. CONFIRM PASSWORD - Confirm the password by retyping it here.
7. ENCRYPTPASS - If the specified password needs to be encrypted, set the ENCRYPTPASS flag to YES. Otherwise, set it to NO. By default, the YES option will be selected.
Note:
If the USEWARFILE flag is set to No, then make sure that the ENCRYPTPASS flag is also set to No.
8. SSL - Indicate whether SSL (Secure Sockets Layer) is to be used to connect to the WebLogic server.
9. SERVER - The name of the specific server instance to be monitored for a WebLogic server (the default value is "localhome").
10. URL - The URL to be accessed to collect metrics pertaining to the WebLogic server. By default, this test connects to a managed WebLogic server and attempts to obtain the metrics of interest by accessing the local MBeans of the server. This parameter can be changed to a value of http://<adminserverIP>:<adminserverPort>. In this case, the test connects to the WebLogic admin server to collect metrics pertaining to the managed server (specified by the HOST and PORT). The URL setting provides the administrator with the flexibility of determining the WebLogic monitoring configuration to use.
Note:
If the admin server is to be used for collecting measures for all the managed WebLogic servers, then it is mandatory that the egurkha war file is deployed to the admin server, and that the admin server is up and running.
11. VERSION - The VERSION text box indicates the version of the WebLogic server to be managed. The default value is "none", in which case the test auto-discovers the WebLogic version. If the value of this parameter is not "none", the test uses the value provided (e.g., 7.0) as the WebLogic version (i.e., it does not auto-discover the WebLogic server version). This parameter has been added to address cases in which the eG agent is not able to discover the WebLogic server version.
12. USEWARFILE - This flag indicates whether or not monitoring is to be done using a Web archive file deployed on the WebLogic server (in which case, the agent uses HTTP/HTTPS to connect to the server). If this flag is set to No, the agent directly connects to the WebLogic server using the T3 protocol (no other file needs to be deployed on the WebLogic server for this to work). Note that T3 protocol-based support is available for WebLogic server versions 9 and 10 only. Also, if the USEWARFILE parameter is set to No, make sure that the ENCRYPTPASS parameter is set to No as well.
When monitoring a WebLogic server deployed on a Unix platform in particular, if the USEWARFILE parameter is set to No, you have to make sure that the eG agent install user is added to the WebLogic users group.
13. WEBLOGICJARLOCATION - Specify the location of the WebLogic server's Java archive (JAR) file. If the USEWARFILE flag is set to No, then the weblogic.jar file specified here is used to connect to the corresponding WebLogic server using the T3 protocol. Note that T3 protocol-based support is available for WebLogic server versions 9 and 10 only.
Outputs of the
test
One set of results for every work manager on the WebLogic server being monitored.
Measurements
made by the
test
Measurement
Measurement Unit
Interpretation
Completed requests:
Indicates the number of
requests that were
successfully serviced by
the work manager
mapped to this
application.
Number
Pending requests:
Indicates the number of
requests to this
application that are
waiting in the queue.
Number
A large number of pending requests to
an application could indicate a
bottleneck in the request processing
ability of that application. If too many
applications on the server support long-
winding request queues, it can
ultimately overload the server, and
eventually choke its performance. It is
therefore essential to quickly isolate
those applications that could be
experiencing issues with request
processing, and then initiate the relevant
remedial action on them. Comparing the
value of this measure across applications
will enable you to accurately identify
which application has the maximum
number of pending requests. Once the
application is spotted, you may want to
observe the variations in the pending
requests count over time for that
application. If you find that the value of
this measure keeps increasing with time
for that application, further investigation
may be necessary to determine the
reasons for the same. One of the
possible reasons for this could be the
lack of sufficient threads. Incoming
requests to an application cannot be
processed if adequate threads are
unavailable; such requests will hence be
in queue until such time that the server
allocates more threads to the
application.
As already stated, the WebLogic server
prioritizes work and allocates threads to
an application based on the rules and
constraints defined within the work
manager that is either defined globally
or mapped specifically to that
application. Therefore, in the event of a
slow down in the request processing rate
of an application, you can consider fine-
tuning its associated work manager
definition, so as to ensure the
uninterrupted processing of requests. A
typical work manager definition should
include one request class and one/more
thread constraints. A request class
expresses a scheduling guideline that
WebLogic Server uses to allocate
threads to requests. Request classes
help ensure that high priority work is
scheduled before less important work,
even if the high priority work is
submitted after the lower priority work.
A work manager can specify any one of
the below-mentioned request classes:
Fair share request class: This
specifies the average thread-use
time required to process requests.
For example, assume that WebLogic
Server is running two modules. The
Work Manager for ModuleA specifies
a fair-share-request-class of 80 and
the Work Manager for ModuleB
specifies a fair-share-request-class
of 20. During a period of sufficient
demand, with a steady stream of
requests for each module such that
the number of requests exceeds the number of threads, WebLogic Server
will allocate 80% and 20% of the
thread-usage time to ModuleA and
ModuleB, respectively.
Response time request class: This
type of request class specifies a
response time goal in milliseconds.
Response time goals are not applied
to individual requests. Instead,
WebLogic Server computes a
tolerable waiting time for requests
with that class by subtracting the
observed average thread use time
from the response time goal, and
schedules requests so that the
average wait for requests with the
class is proportional to its tolerable
waiting time.
Context request class: This type of
request class assigns request classes
to requests based on context
information, such as the current user
or the current user’s group.
A constraint defines minimum and
maximum numbers of threads allocated
to execute requests and the total
number of requests that can be queued
or executing before WebLogic Server
begins rejecting requests. You can define
the following types of constraints:
max-threads-constraint - This
constraint limits the number of
concurrent threads executing
requests from the constrained work
set. The default is unlimited. For
example, consider a constraint
defined with maximum threads of 10
and shared by 3 entry points. The
scheduling logic ensures that not
more than 10 threads are executing
requests from the three entry points
combined.
min-threads-constraint - This
constraint guarantees a number of
threads the server will allocate to
affected requests to avoid deadlocks.
The default is zero.
A min-threads-constraint value
of one is useful, for example, for
a replication update request,
which is called synchronously
from a peer.
capacity - This constraint causes
the server to reject requests
only when it has reached its
capacity. The default is -1. Note
that the capacity includes all
requests, queued or executing,
from the constrained work set.
Work is rejected either when an
individual capacity threshold is
exceeded or if the global
capacity is exceeded. This
constraint is independent of the
global queue threshold.
Stuck threads:
Indicates the number of
threads that are
considered to be stuck
on the basis of any
thread constraints.
Number
WebLogic Server diagnoses a thread as
stuck if it is continually working (not
idle) for a set period of time. You can
tune a server's thread detection
behavior by changing the length of time
before a thread is diagnosed as stuck,
and by changing the frequency with
which the server checks for stuck
threads.
In response to stuck threads, you can
define a Stuck Thread Work Manager
component that can shut down the Work
Manager, move the application into
admin mode, or mark the server
instance as failed.
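The fair-share and capacity behaviors described above can be modeled in a few lines. This is a simplified illustration of the scheduling guideline, not WebLogic's actual scheduler; the function names are hypothetical:

```python
# Fair share request class: when demand exceeds the available threads,
# thread-usage time is allocated in proportion to each module's fair share
# (e.g., shares of 80 and 20 yield an 80%/20% split).

def fair_share_allocation(fair_shares, total_thread_time):
    """fair_shares maps a module name to its fair-share value."""
    total = sum(fair_shares.values())
    return {m: total_thread_time * s / total for m, s in fair_shares.items()}

# Capacity constraint: reject new work once queued + executing requests
# reach the configured capacity; -1 (the default) means unlimited.

def accept_request(queued, executing, capacity):
    if capacity == -1:
        return True
    return queued + executing < capacity
```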
2.1.2.5 WebLogic Thread Pools Test
Starting from WebLogic server release 9.0, every server instance uses a self-tuning thread pool. All requests, whether related to system administration or application activity, are processed by this single thread pool. The self-tuning thread pool also adjusts its pool size automatically, based on the throughput history that WebLogic Server gathers every 2 seconds and on the queue size.
This test monitors how the self-tuning thread pool is being used, and in the process reports whether there are adequate idle threads in the pool to handle additional workload that may be imposed on the WebLogic server. The test also turns the spotlight on the request (if any) that is hogging threads, and enables you to quickly capture a sudden/consistent increase in queue size, which in turn might impact the pool size.
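The self-tuning idea can be sketched as a simple feedback loop: at each sampling interval the pool compares current throughput with the best throughput previously seen at the current size, and grows or shrinks accordingly. The real WebLogic algorithm is more sophisticated; this is only an illustration of the loop:

```python
# Hypothetical sketch of throughput-driven pool sizing.

def next_pool_size(size, throughput, history):
    """history maps a pool size to the best throughput observed at that size."""
    best = history.get(size)
    if best is None or throughput > best:
        history[size] = throughput
        return size + 1          # throughput improved: try a bigger pool
    return max(1, size - 1)      # no improvement: back off
```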
Purpose
Monitors how the self-tuning thread pool is being used, and in the process reports
whether there are adequate idle threads in the pool to handle additional workload
that may be imposed on the WebLogic server
Target of the
test
A WebLogic application server
Agent
deploying the
test
An internal agent
Configurable
parameters for
the test
1. TEST PERIOD - How often should the test be executed
2. HOST The IP address of the WebLogic server
3. PORT The port number of the WebLogic server
4. USER The admin user name of the WebLogic server being monitored.
5. PASSWORD The password of the specified admin user
6. CONFIRM PASSWORD Confirm the password by retyping it here.
7. ENCRYPTPASS - If the specified password needs to be encrypted, set the
ENCRYPTPASS flag to YES. Otherwise, set it to NO. By default, the YES option will
be selected.
Note:
If the USEWARFILE flag is set to No, then make sure that the ENCRYPTPASS flag
is also set to No.
8. SSL Indicate whether the SSL (Secured Socket Layer) is to be used to connect
to the WebLogic server.
9. SERVER - The name of the specific server instance to be monitored for a
WebLogic server (the default value is "localhome")
10. URL The URL to be accessed to collect metrics pertaining to the WebLogic
server. By default, this test connects to a managed WebLogic server and
attempts to obtain the metrics of interest by accessing the local Mbeans of the
server. This parameter can be changed to a value of
http://<adminserverIP>:<adminserverPort>. In this case, the test connects
to the WebLogic admin server to collect metrics pertaining to the managed
server (specified by the HOST and PORT). The URL setting provides the
administrator with the flexibility of determining the WebLogic monitoring
configuration to use.
Note:
If the admin server is to be used for collecting measures for all the managed
WebLogic servers, then it is mandatory that the egurkha war file is deployed to
the admin server, and it is up and running.
11. VERSION - The VERSION textbox indicates the version of the Weblogic server to
be managed. The default value is "none", in which case the test auto-discovers
the weblogic version. If the value of this parameter is not "none", the test uses
the value provided (e.g., 7.0) as the weblogic version (i.e., it does not auto-
discover the weblogic server version). This parameter has been added to
address cases when the eG agent is not able to discover the WebLogic server
version.
12. USEWARFILE - This flag indicates whether or not monitoring is to be done using a
Web archive file deployed on the WebLogic server (in which case, the eG agent
uses HTTP/HTTPS to connect to the server). If this flag is set to No, the agent
directly connects to the WebLogic server using the T3 protocol (no other file
needs to be deployed on the WebLogic server for this to work). Note that the T3
protocol-based support is available for WebLogic servers ver. 9 and ver. 10 only.
Also, if the USEWARFILE parameter is set to No, make sure that the ENCRYPTPASS
parameter is set to No as well.
Monitoring WebLogic Application Servers
13. When monitoring a WebLogic server deployed on a Unix platform in particular, if
the USEWARFILE parameter is set to No, make sure that the eG agent install
user is added to the WebLogic users group.
14. WEBLOGICJARLOCATION - Specify the location of the WebLogic server's Java
archive (Jar) file. If the USEWARFILE flag is set to No, then the weblogic.jar file
specified here is used to connect to the corresponding WebLogic server using
the T3 protocol. Note that the T3 protocol-based support is available for WebLogic
servers ver. 9 and ver. 10 only.
Outputs of the test: One set of results for the self-tuning thread pool on the WebLogic
server being monitored.
Measurements made by the test (each entry below lists the measure, its unit, and its
interpretation):
Active threads:
Indicates the total
number of active
threads in this pool.
Number
A high value for this measure is
indicative of a high load on the
applications deployed on the WebLogic
server.
This measure is also useful for
determining usage trends. For example,
it can show the time of day and the day
of the week in which you usually reach
peak thread count. In addition, the
creation of too many threads can result
in out of memory errors or thrashing. By
watching this metric, you can reduce
excessive memory consumption before
it’s too late.
Hogging threads:
Indicates the number of
threads that are
currently hogged by a
request.
Number
Ideally, the value of this measure should
be low. A very high value indicates that
a request is using up too many threads.
Hogging threads will either be declared
as stuck after the configured timeout or
will return to the pool before that. The
self-tuning mechanism will backfill if
necessary.
WebLogic Server automatically detects
when a thread in a pool becomes
“stuck.” Because a stuck thread cannot
complete its current work or accept new
work, the server logs a message each
time it diagnoses a stuck thread.
WebLogic Server diagnoses a thread as
stuck if it is continually working (not
idle) for a set period of time. You can
tune a server’s thread detection
behavior by changing the length of time
before a thread is diagnosed as stuck,
and by changing the frequency with
which the server checks for stuck
threads. Although you can change the
criteria WebLogic Server uses to
determine whether a thread is stuck,
you cannot change the default behavior
of setting the “warning” and “critical”
health states when all threads in a
particular execute queue become stuck.
Idle threads:
Indicates the number of
idle threads (i.e., the
threads that are ready
to process a new job as
and when it arrives) in
the pool.
Number
A high value is desired for this measure.
Queue length:
Indicates the number of
pending requests in the
priority queue.
Number
This measure includes both internal
system requests and requests made
by users.
A low value is desired for this measure.
A high value or a sudden increase in this
value may indicate a sudden slowdown
in responsiveness or a performance
bottleneck.
Standby threads:
Indicates the number of
threads that are
currently in the standby
pool.
Number
Threads that are not needed to handle
the present work load are designated as
standby and are added to the standby
pool. These threads are activated when
more threads are needed.
Throughput:
Indicates the number of
requests in the priority
queue that are
completed.
Number
The queue monitors throughput over
time and based on history, determines
whether to adjust the thread count or
not. For example, if historical throughput
statistics indicate that a higher thread
count increased throughput, the server
increases it. Similarly, if statistics
indicate that fewer threads did not
reduce throughput, the count will be
reduced.
Total threads:
Indicates the total
number of threads in
this pool.
Number
2.1.2.6 Tests Disabled by Default for the JVM Layer
The tests discussed above are enabled by default for a WebLogic server. Besides these tests, the eG
agent can be optionally configured to execute a few other tests on the WebLogic server’s JVM so as to
report critical statistics related to the Java transactions, classes loaded/unloaded, threads used, CPU
and memory resources used, garbage collection activity, uptime of the JVM, etc. These additional tests
are disabled by default for the WebLogic server. To enable one/more tests, go to the ENABLE / DISABLE
TESTS page using the menu sequence: Agents -> Tests -> Enable/Disable, pick WebLogic as the
Component type, Performance as the Test type, choose the tests from the DISABLED TESTS list, and click on
the >> button to move the tests to the ENABLED TESTS list. Finally, click the Update button.
These JVM tests have been discussed below.
When a user initiates a transaction to a Java-based web application, the transaction typically travels
via many Java components before completing execution and sending out a response to the user.
Figure 2.4 reveals some of the Java components that a web transaction/web request visits during its
journey.
Figure 2.4: The layers through which a Java transaction passes
The key Java components depicted by Figure 2.4 have been briefly described below:
Filter: A filter is a program that runs on the server before the servlet or JSP page with which
it is associated. All filters must implement javax.servlet.Filter. This interface comprises three
methods: init, doFilter, and destroy.
Servlet: A servlet acts as an intermediary between the client and the server. As servlet
modules run on the server, they can receive and respond to requests made by the client. If a
servlet is designed to handle HTTP requests, it is called an HTTP Servlet.
JSP: JavaServer Pages are an extension of the Java servlet technology. A JSP is translated
into a Java servlet before being run, and it processes HTTP requests and generates responses
like any servlet. Translation occurs the first time the application is run.
Struts: The Struts Framework is a standard for developing well-architected Web applications.
Based on the Model-View-Controller (MVC) design paradigm, it distinctly separates all three
levels (Model, View, and Control).
A delay experienced by any of the aforesaid Java components can adversely impact the total response
time of the transaction, thereby scarring the user experience with the web application. In addition,
delays in JDBC connectivity and slowdowns in SQL query executions (if the application interacts with a
database), bottlenecks in delivery of mails via the Java Mail API (if used), and any slow method calls,
can also cause insufferable damage to the 'user-perceived' health of a web application.
The challenge here for administrators is to not just isolate the slow transactions, but to also accurately
identify where the transaction slowed down and why - is it owing to inefficient JSPs? poorly written
servlets or struts? poor or the lack of any JDBC connectivity to the database? long running queries?
inefficient API calls? or delays in accessing the POJO methods? The eG JTM Monitor provides
administrators with answers to these questions!
With the help of the Java Transactions test, the eG JTM Monitor traces the route a configured web
transaction takes, and captures live the total responsiveness of the transaction and the response time
of each Java component it visits en route. This way, the solution proactively detects transaction
slowdowns, and also precisely points you to the Java components causing it - is it the Filters? JSPs?
Servlets? Struts? JDBC? SQL query? Java Mail API? or the POJO? In addition to revealing where (i.e.,
at which Java component) a transaction slowed down, the solution also provides the following
intelligent insights, on demand, making root-cause identification and resolution easier:
A look at the methods that took too long to execute, thus leading you to those methods that
may have contributed to the slowdown;
Single-click access to each invocation of a chosen method, which provides pointers to when
and where a method spent longer than desired;
A quick glance at SQL queries and Java errors that may have impacted the responsiveness of
the transaction;
Using these interesting pointers provided by the eG JTM Monitor, administrators can diagnose the root-
cause of transaction slowdowns within minutes, rapidly plug the holes, and thus ensure that their
critical web applications perform at peak capacity at all times!
Before attempting to monitor Java transactions using the eG JTM Monitor, the following configurations
will have to be performed:
1. In the <EG_INSTALL_DIR>\lib directory (on Windows; on Unix, this will be /opt/egurkha/lib) of the eG
agent, you will find the following files:
eg_jtm.jar
aspectjrt.jar
aspectjweaver.jar
jtmConn.props
jtmLogging.props
jtmOther.props
2. Login to the system hosting the Java application to be monitored.
3. If the eG agent will be 'remotely monitoring' the target Java application (i.e., if the Java
application is to be monitored in an 'agentless manner'), then, copy all the files mentioned above
from the <EG_INSTALL_DIR>\lib directory (on Windows; on Unix, this will be /opt/egurkha/lib) of the eG
agent to any location on the Java application host.
4. Then, proceed to edit the start-up script of the Java application being monitored, and append the
following lines to it:
set JTM_HOME=<<PATH OF THE LOCAL FOLDER CONTAINING THE JAR FILES AND PROPERTY FILES
LISTED ABOVE>>
"-javaagent:%JTM_HOME%\aspectjweaver.jar"
"-DEG_JTM_HOME=%JTM_HOME%"
Note that the above lines will change based on the operating system and the web/web application server being
monitored.
Then, add the eg_jtm.jar, aspectjrt.jar, and aspectjweaver.jar files to the CLASSPATH of the Java
application being monitored.
Finally, save the file. Once this is done, the next time the Java application starts, the eG JTM
Monitor scans the web requests to the application for configured URL patterns. When a match is
found, the eG JTM Monitor collects the desired metrics and stores them in memory.
Then, every time the eG agent runs the Java Transactions test, the agent will poll the eG JTM Monitor
(on the target application) for the required metrics, extract the same from the application's
memory, and report them to the eG manager.
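On a Unix host, for example, the appended lines above might take the following shape in a Bourne-shell start-up script. This is a sketch only: the JTM_HOME path is an assumption (substitute the folder to which you copied the files), and the variable the server reads for JVM options (JAVA_OPTS here) varies from one web/application server start-up script to another.

```shell
# Hypothetical Unix rendering of the start-up additions described above.
# JTM_HOME must point at the folder holding the copied jar and .props files.
JTM_HOME=/opt/egurkha/lib

# Register the AspectJ weaver as a -javaagent and pass the eG JTM home.
JAVA_OPTS="$JAVA_OPTS -javaagent:$JTM_HOME/aspectjweaver.jar -DEG_JTM_HOME=$JTM_HOME"

# Add the three jar files to the application's CLASSPATH.
CLASSPATH="$CLASSPATH:$JTM_HOME/eg_jtm.jar:$JTM_HOME/aspectjrt.jar:$JTM_HOME/aspectjweaver.jar"
export JAVA_OPTS CLASSPATH
```

Adjust the variable names to match the conventions of your server's own start-up script before saving it.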
5. Next, edit the jtmConn.props file. You will find the following lines in the file:
#Contains the connection properties of eGurkha Java Transaction Monitor
JTM_Port=13631
Designated_Agent=
By default, the JTM_Port parameter is set to 13631. If the Java application being monitored listens
on a different JTM port, then specify the same here. In this case, when managing a Java Application
using the eG administrative interface, specify the JTM_Port that you set in the jtmConn.props file as
the Port of the Java application.
Also, against the Designated_Agent parameter, specify the IP address of the eG agent which will poll
the eG JTM Monitor for metrics. If no IP address is provided here, then the eG JTM Monitor will treat
the host from which the very first 'measure request' comes in as the Designated_Agent.
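For instance, after editing, the jtmConn.props file might read as follows; the port and IP address below are purely illustrative values, not defaults:

```properties
#Contains the connection properties of eGurkha Java Transaction Monitor
JTM_Port=14075
Designated_Agent=192.168.10.8
```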
6. Finally, save the jtmConn.props file.
Then, proceed to configure the Java Transactions test as discussed below.
Purpose: Traces the route a configured web transaction takes, and captures live the total
responsiveness of the transaction and the response time of each component it visits
en route. This way, the solution proactively detects transaction slowdowns, and also
precisely points you to the Java component causing it - is it the Filters? JSPs?
Servlets? Struts? JDBC? SQL query? Java Mail API? or the POJO?
Target of the test: A Java application/web application server
Agent deploying the test: An internal/remote agent
Note:
In case a specific Designated_Agent is not provided, and the eG JTM Monitor treats the host from
which the very first 'measure request' comes in as the Designated_Agent, then if such a
Designated_Agent is stopped or uninstalled for any reason, the eG JTM Monitor will wait for a
maximum of 10 measure periods for that 'deemed' Designated_Agent to request for metrics. If
no requests come in for 10 consecutive measure periods, then the eG JTM Monitor will begin
responding to 'measure requests' coming in from any other eG agent.
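The fallback behaviour described in this note can be sketched as follows. This is a simplified model, not eG code; the class and method names are hypothetical, and only the 10-measure-period limit comes from the note above.

```python
MAX_MISSED_PERIODS = 10  # from the note: wait at most 10 measure periods

class JTMDesignatedAgent:
    """Simplified model of how the eG JTM Monitor picks its designated agent."""

    def __init__(self, configured_ip=None):
        # None means no Designated_Agent was configured: the first
        # requester will be treated as the designated agent.
        self.agent_ip = configured_ip
        self.missed = 0

    def on_measure_request(self, requester_ip):
        """Return True if this requester's 'measure request' is served."""
        if self.agent_ip is None or self.missed >= MAX_MISSED_PERIODS:
            # Adopt (or re-adopt) a designated agent.
            self.agent_ip = requester_ip
        if requester_ip == self.agent_ip:
            self.missed = 0
            return True
        return False  # requests from other agents are ignored

    def on_measure_period_elapsed(self):
        """Call once per measure period in which the designated agent was silent."""
        self.missed += 1
```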
Configurable parameters for the test:
1. TEST PERIOD - How often should the test be executed
2. HOST - The host for which the test is to be configured
3. PORT - The port number at which the specified HOST listens; if Java Transaction
Monitoring is enabled for the target Java application, then the JTM PORT has to
be specified here
4. JTM PORT - Specify the port number configured as the JTM_Port in the
jtmConn.props file described in the procedure outlined above.
5. URL PATTERNS - Provide a comma-separated list of the URL patterns of web
requests/transactions to be monitored. The format of your specification should
be as follows: <DisplayName_of_Pattern>:<Transaction_Pattern>. For instance,
your specification can be: login:*log*,ALL:*,pay:*pay*
6. FILTERED URL PATTERNS - Provide a comma-separated list of the URL patterns
of transactions/web requests to be excluded from the monitoring scope of this
test. For example, *blog*,*paycheque*
7. SLOW URL THRESHOLD - The Slow transactions measure of this test will report
the number of transactions (of the configured patterns) for which the response
time is higher than the value (in seconds) specified here.
8. METHOD EXEC CUTOFF - The detailed diagnosis of the Slow transactions
measure allows you to drill down to a URL tree, where the methods invoked by a
chosen transaction are listed in the descending order of their execution time. By
configuring an execution duration (in seconds) here, you can have the URL Tree
list only those methods that have been executing for a duration greater than the
specified value. For instance, if you specify 5 here, the URL tree for a
transaction will list only those methods that have been executing for over 5
seconds, thus shedding light on the slow method calls alone.
9. MAX SLOW URLS PER TEST PERIOD - Specify the number of top-n transactions
(of a configured pattern) that should be listed in the detailed diagnosis of the
Slow transactions measure, every time the test runs. By default, this is set to
10, indicating that the detailed diagnosis of the Slow transactions measure will
by default list the top-10 transactions, arranged in the descending order of their
response times.
10. MAX ERROR URLS PER TEST PERIOD - Specify the number of top-n transactions
(of a configured pattern) that should be listed in the detailed diagnosis of the
Error transactions measure, every time the test runs. By default, this is set to
10, indicating that the detailed diagnosis of the Error transactions measure will
by default list the top-10 transactions, in terms of the number of errors they
encountered.
11. DD FREQUENCY - Refers to the frequency with which detailed
diagnosis measures are to be generated for this test. The default is 1:1. This
indicates that, by default, detailed measures will be generated every time this
test runs, and also every time the test detects a problem. You can modify this
frequency, if you so desire. Also, if you intend to disable the detailed diagnosis
capability for this test, you can do so by specifying none against DD
FREQUENCY.
12. DETAILED DIAGNOSIS - To make diagnosis more efficient and accurate, the eG
Enterprise suite embeds an optional detailed diagnostic capability. With this
capability, the eG agents can be configured to run detailed, more elaborate tests
as and when specific problems are detected. To enable the detailed diagnosis
capability of this test for a particular server, choose the On option. To disable
the capability, click on the Off option.
The option to selectively enable/disable the detailed diagnosis capability will be
available only if the following conditions are fulfilled:
The eG manager license should allow the detailed diagnosis capability
Both the normal and abnormal frequencies configured for the detailed
diagnosis measures should not be 0.
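The URL PATTERNS specification above pairs a display name with a wildcard pattern. The manual does not spell out the exact matching semantics, so the sketch below assumes simple shell-style '*' wildcards; parse_patterns and matches are hypothetical helper names used only for illustration.

```python
import fnmatch

def parse_patterns(spec):
    """Parse '<DisplayName>:<Pattern>' pairs from a comma-separated list,
    e.g. 'login:*log*,ALL:*,pay:*pay*'."""
    patterns = {}
    for item in spec.split(","):
        name, _, pattern = item.partition(":")
        patterns[name.strip()] = pattern.strip()
    return patterns

def matches(url, pattern):
    # Assumes shell-style '*' wildcards; the real eG matching rules may differ.
    return fnmatch.fnmatch(url, pattern)
```

Under this reading, a request for /app/login.jsp would be counted under both the 'login' pattern (*log*) and the catch-all 'ALL' pattern (*).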
Outputs of the test: One set of results for each configured URL pattern
Measurements made by the test (each entry below lists the measure, its unit, and its
interpretation):
Total transactions:
Indicates the total number of
transactions of this pattern that
the target application handled
during the last measurement
period.
Number
Avg. response time:
Indicates the average time taken
by the transactions of this
pattern to complete execution.
Secs
Compare the value of this
measure across patterns to
isolate the type of transactions
that were taking too long to
execute.
You can then take a look at the
values of the other measures to
figure out where the transaction
is spending too much time.
Slow transactions:
Indicates the number of
transactions of this pattern that
were slow during the last
measurement period.
Number
This measure will report the
number of transactions with a
response time higher than the
configured SLOW URL
THRESHOLD.
A high value is a cause for
concern, as too many slow
transactions to an application can
significantly damage the user
experience with that application.
Use the detailed diagnosis of this
measure to know which
transactions are slow.
Slow transactions response
time:
Indicates the average time taken
by the slow transactions of this
pattern to execute.
Secs
Error transactions:
Indicates the number of
transactions of this pattern that
experienced errors during the
last measurement period.
Number
A high value is a cause for
concern, as too many error-prone
transactions to an application can
significantly damage the user
experience with that application.
Use the detailed diagnosis of this
measure to isolate the error
transactions.
Error transactions response
time:
Indicates the average duration
for which the transactions of this
pattern were processed before
an error condition was detected.
Secs
Filters:
Indicates the number of filters
that were accessed by the
transactions of this pattern
during the last measurement
period.
Number
A filter is a program that runs on
the server before the servlet or
JSP page with which it is
associated.
Filters response time:
Indicates the average time spent
by the transactions of this
pattern at the Filters layer.
Secs
Typically, the init, doFilter, and
destroy methods are called at
the Filters layer. Issues in these
method invocations can increase
the time spent by a transaction in
the Filters Java component.
Compare the value of this
measure across patterns to
identify the transaction pattern
that spent the maximum time
with the Filters component.
If one/more transactions of a
pattern are found to be slow,
then, you can compare the value
of this measure with the other
response time values reported by
this test to determine where the
slowdown actually occurred - in
the filters, in JSPs, in servlets, in
struts, in exception handling,
when executing JDBC/SQL
queries, when sending Java mails,
or when accessing POJOs.
JSPs accessed:
Indicates the number of JSPs
accessed by the transactions of
this pattern during the last
measurement period.
Number
JSPs response time:
Indicates the average time spent
by the transactions of this
pattern at the JSP layer.
Secs
Compare the value of this
measure across patterns to
identify the transaction pattern
that spent the maximum time in
JSPs.
If one/more transactions of a
pattern are found to be slow,
then, you can compare the value
of this measure with the other
response time values reported by
this test to determine where the
slowdown actually occurred - in
the filters, in JSPs, in servlets, in
struts, in exception handling,
when executing JDBC/SQL
queries, when sending Java mails,
or when accessing POJOs.
HTTP Servlets Accessed:
Indicates the number of HTTP
servlets that were accessed by
the transactions of this pattern
during the last measurement
period.
Number
HTTP servlets response time:
Indicates the average time taken
by the HTTP servlets for
processing the HTTP requests of
this pattern.
Secs
Badly written servlets can take
too long to execute, and can
hence obstruct the smooth
execution of the dependent
transactions.
By comparing the value of this
measure across patterns, you can
figure out which transaction
pattern is spending the maximum
time in Servlets.
If one/more transactions of a
pattern are found to be slow,
then, you can compare the value
of this measure with the other
response time values reported by
this test to determine where the
slowdown actually occurred - in
the filters, in JSPs, in servlets, in
struts, in exception handling,
when executing JDBC/SQL
queries, when sending Java mails,
or when accessing POJOs.
Generic servlets accessed:
Indicates the number of generic
(non-HTTP) servlets that were
accessed by the transactions of
this pattern during the last
measurement period.
Number
Generic servlets response
time:
Indicates the average time taken
by the generic (non-HTTP)
servlets for processing
transactions of this pattern.
Secs
Badly written servlets can take
too long to execute, and can
hence obstruct the smooth
execution of the dependent
transactions.
By comparing the value of this
measure across patterns, you can
figure out which transaction
pattern is spending the maximum
time in Servlets.
If one/more transactions of a
pattern are found to be slow,
then, you can compare the value
of this measure with the other
response time values reported by
this test to determine where the
slowdown actually occurred - in
the filters, in JSPs, in servlets, in
struts, in exception handling,
when executing JDBC/SQL
queries, when sending Java mails,
or when accessing POJOs.
JDBC queries:
Indicates the number of JDBC
statements that were executed
by the transactions of this
pattern during the last
measurement period.
Number
The methods captured by the eG
JTM Monitor from the Java class for
the JDBC sub-component include:
commit(), rollback(..), close(),
getResultSet(), executeBatch(),
cancel(), connect(String, Properties),
getConnection(..),
getPooledConnection(..)
JDBC response time:
Indicates the average time taken
by the transactions of this
pattern to execute JDBC
statements.
Secs
By comparing the value of this
measure across patterns, you can
figure out which transaction
pattern is taking the most time to
execute JDBC queries.
If one/more transactions of a
pattern are found to be slow,
then, you can compare the value
of this measure with the other
response time values reported by
this test to determine where the
slowdown actually occurred - in
the filters, in JSPs, in servlets, in
struts, in exception handling,
when executing JDBC/SQL
queries, when sending Java mails,
or when accessing POJOs.
SQL statements executed:
Indicates the number of SQL
queries executed by the
transactions of this pattern
during the last measurement
period.
Number
SQL statement time avg.:
Indicates the average time taken
by the transactions of this
pattern to execute SQL queries.
Secs
Inefficient queries can take too
long to execute on the database,
thereby significantly delaying the
responsiveness of the dependent
transactions. To know which
transactions have been most
impacted by such queries,
compare the value of this
measure across the transaction
patterns.
If one/more transactions of a
pattern are found to be slow,
then, you can compare the value
of this measure with the other
response time values reported by
this test to determine where the
slowdown actually occurred - at
the filters layer, JSPs layer,
servlets layer, struts layer, in
exception handling, when
executing JDBC/SQL queries,
when sending Java mails, or
when accessing POJOs.
Exceptions seen:
Indicates the number of
exceptions encountered by the
transactions of this pattern
during the last measurement
period.
Number
Ideally, the value of this measure
should be 0.
Exceptions response time:
Indicates the average time which
the transactions of this pattern
spent in handling exceptions.
Secs
If one/more transactions of a
pattern are found to be slow,
then, you can compare the value
of this measure with the other
response time values reported by
this test to determine where the
slowdown actually occurred - at
the filters layer, JSPs layer,
servlets layer, struts layer, in
exception handling, when
executing JDBC/SQL queries,
when sending Java mails, or
when accessing POJOs.
Struts accessed:
Indicates the number of struts
accessed by the transactions of
this pattern during the last
measurement period.
Number
The Struts framework is a
standard for developing well-
architected Web applications.
Struts response time:
Indicates the average time spent
by the transactions of this
pattern at the Struts layer.
Secs
If you compare the value of this
measure across patterns, you can
figure out which transaction
pattern spent the maximum time
in Struts.
If one/more transactions of a
pattern are found to be slow,
then, you can compare the value
of this measure with the other
response time values reported by
this test to determine where the
slowdown actually occurred - in
the filters, in JSPs, in servlets, in
struts, in exception handling,
when executing JDBC/SQL
queries, when sending Java mails,
or when accessing POJOs.
Java mails:
Indicates the number of mails
sent by the transactions of this
pattern during the last
measurement period, using the
Java mail API.
Number
The eG JTM Monitor captures any
mail that has been sent from the
monitored application using Java
Mail API. Mails sent using other
APIs are ignored by the eG JTM
Monitor.
Java mail API time:
Indicates the average time taken
by the transactions of this
pattern to send mails using the
Java mail API.
Secs
If one/more transactions of a
pattern are found to be slow,
then, you can compare the value
of this measure with the other
response time values reported by
this test to determine where the
slowdown actually occurred - in
the filters, in JSPs, in servlets, in
struts, in exception handling,
when executing JDBC/SQL
queries, when sending Java mails,
or when accessing POJOs.
POJOs:
Indicates the number of
transactions of this pattern that
accessed POJOs during the last
measurement period.
Number
Plain Old Java Object (POJO)
refers to a 'generic' method in the
Java language. All methods that
are not covered by any of the
Java components (e.g., JSPs,
Struts, Servlets, Filters,
Exceptions, Queries, etc.)
discussed above will be
automatically included under
POJO.
When reporting the number of
POJO methods, the eG agent will
consider only those methods with
a response time value that is
higher than the threshold limit
configured against the METHOD
EXEC CUTOFF parameter.
POJO avg. access time:
Indicates the average time taken
by the transactions of this
pattern to access POJOs.
Secs
If one/more transactions of a
pattern are found to be slow,
then, you can compare the value
of this measure with the other
response time values reported by
this test to determine where the
slowdown actually occurred - in
the filters, in JSPs, in servlets, in
struts, in exception handling,
when executing JDBC/SQL
queries, when sending Java mails,
or when accessing POJOs.
The detailed diagnosis of the Slow transactions measure lists the top-10 (by default) transactions of a
configured pattern that have violated the response time threshold set using the SLOW URL
THRESHOLD parameter of this test. Against each transaction, the date/time at which the transaction
was initiated/requested will be displayed. Besides the request date/time, the remote host from which
the transaction request was received and the total response time of the transaction will also be
reported. This response time is the sum total of the response times of each of the top methods (in
terms of time taken for execution) invoked by that transaction. To compute this sum total, the test
considers only those methods with a response time value that is higher than the threshold limit
configured against the METHOD EXEC CUTOFF parameter.
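The arithmetic described above can be sketched as follows; the cutoff value of 5 seconds is an illustrative setting for the METHOD EXEC CUTOFF parameter, not a default:

```python
METHOD_EXEC_CUTOFF = 5.0  # seconds; illustrative value for METHOD EXEC CUTOFF

def total_response_time(method_times):
    """Sum only those method execution times that exceed the cutoff,
    mirroring how the detailed-diagnosis total is described above."""
    return sum(t for t in method_times if t > METHOD_EXEC_CUTOFF)
```

For a transaction whose top methods ran for 6.0s, 2.0s, and 7.5s, the reported total would be 13.5s, since the 2.0s method falls below the cutoff.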
In the detailed diagnosis, the transactions will typically be arranged in the descending order of the
total response time; this way, you would be able to easily spot the slowest transaction. To know what
caused the transaction to be slow, you can take a look at the SUBCOMPONENT DETAILS column of the
detailed diagnosis. Here, the time spent by the transaction in each of the Java components (FILTER,
STRUTS, SERVLETS, JSPS, POJOS, SQL, JDBC, etc.) will be listed, thus leading you to the exact Java component
where the slowdown occurred.
Figure 2.5: The detailed diagnosis of the Slow transactions measure
You can even perform detailed method-level analysis to isolate the methods taking too long to
execute. For this, click on the URL Tree link. Figure 2.6 will then appear. In the left panel of Figure 2.6,
you will find the list of transactions that match a configured pattern; these transactions will be sorted
in the descending order of their Total Response Time (by default). This is indicated by the Total
Response Time option chosen by default from the Sort by list in Figure 2.6. If you select a transaction
from the left panel, an At-A-Glance tab page will open by default in the right panel, providing quick, yet
deep insights into the performance of the chosen transaction and the reasons for its slowness. This
tab page begins by displaying the URL of the chosen transaction, the total Response time of the
transaction, the time at which the transaction was last requested, and the Remote Host from which the
request was received.
If the Response time appears to be very high, then you can take a look at the Method Level Breakup
section to figure out which method called by which Java component (such as FILTER, STRUTS, SERVLETS,
JSPS, POJOS, SQL, JDBC, etc.) could have caused the slowdown. This section provides a horizontal bar
graph, which reveals the percentage of time for which the chosen transaction spent executing each of
the top methods (in terms of execution time) invoked by it. The legend below clearly indicates the top
methods and the layer/sub-component that invoked each method. Against every method, the number
of times that method was invoked in the Measurement Time, the Duration (in Secs) for which the method
executed, and the percentage of the total execution time of the transaction for which the method was
in execution will be displayed, thus quickly pointing you to those methods that may have contributed
to the slowdown. The methods displayed here and featured in the bar graph depend upon the METHOD
EXEC CUTOFF configuration of this test - in other words, only those methods with an execution
duration that exceeds the threshold limit configured against METHOD EXEC CUTOFF will be displayed
in the Method Level Breakup section.
Figure 2.6: The Method Level Breakup section in the At-A-Glance tab page
While the Method Level Breakup section provides method-level insights into responsiveness, for a sub-
component or layer-level breakup of responsiveness, scroll down the At-A-Glance tab to view the
Component Level Breakup section (see Figure 2.7). Using this horizontal bar graph, you can quickly tell
where - i.e., in which Java component - the transaction spent the maximum time. A quick glance at
the graph's legend will reveal the Java components the transaction visited, the number of methods
invoked by each Java component, the Duration (Secs) for which the transaction was processed by the
Java component, and what Percentage of the total transaction response time was spent in the Java
component.
Figure 2.7: The Component Level Breakup section in the At-A-Glance tab page
Besides Java methods, long-running SQL queries can also contribute to the poor responsiveness of a
transaction wherever the target Java application interacts with a database. You can use the At-A-
Glance tab page to determine whether the transaction interacts with the database or not, and if so,
how healthy that interaction is. For this, scroll down the At-A-Glance tab page.
Figure 2.8: Query Details in the At-A-Glance tab page
Upon scrolling, you will find query details below the Component Level Breakup section. All the SQL queries
that the chosen transaction executes on the backend database will be listed here in the descending
order of their Duration. Corresponding to each query, you will be able to view the number of times that
query was executed, the Duration for which it executed, and what percentage of the total transaction
response time was spent in executing that query. A quick look at this tabulation would suffice to
identify the query which executed for an abnormally long time on the database, causing the
transaction's responsiveness to suffer. For a detailed query description, click on the query. Figure 2.9
will then pop up displaying the complete query and its execution duration.
Figure 2.9: Detailed description of the query clicked on
This way, the At-A-Glance tab page allows you to analyze, at a glance, all the factors that can influence
transaction response time - be it Java methods, Java components, or SQL queries - and enables you
to quickly diagnose the source of a transaction slowdown. If, for instance, you figure out that a
particular Java method is responsible for the slowdown, you can zoom into the performance of the
'suspect method' by clicking on that method in the Method Level Breakup section of the At-A-Glance tab
page. This will automatically lead you to the Trace tab page, where all invocations of the chosen
method will be highlighted (see Figure 2.10).
Figure 2.10: The Trace tab page displaying all invocations of the method chosen from the Method Level Breakup
section
Typically, clicking on the Trace tab page will list all the methods invoked by the chosen transaction,
starting with the very first method. Methods and sub-methods (a method invoked within a method)
are arranged in a tree-structure, which can be expanded or collapsed at will. To view the sub-methods
within a method, click on the arrow icon that precedes that method in the Trace tab page. Likewise, to
collapse a tree, click once again on the arrow icon. Using the tree-structure, you can easily trace the
sequence in which methods are invoked by a transaction.
If a method is chosen for analysis from the Method Level Breakup section of the At-A-Glance tab page, the
Trace tab page will automatically bring your attention to all invocations of that method by highlighting
them (as shown by Figure 2.10). Likewise, if a Java component is clicked in the Component Level
Breakup section of the At-A-Glance section, the Trace tab page will automatically appear, displaying all
the methods invoked from the chosen Java component (as shown by Figure 2.11).
Figure 2.11: The Trace tab page displaying all methods invoked at the Java layer/sub-component chosen from the
Component Level Breakup section
For every method, the Trace tab page displays a Request Processing bar, which will accurately indicate
when, in the sequence of method invocations, the said method began execution and when it ended;
with the help of this progress bar, you will be able to fairly judge the duration of the method, and also
quickly tell whether any methods were called prior to the method in question. In addition, the Trace tab
page will also display the time taken for a method to execute (Method Execution Time) and the
percentage of the time the transaction spent in executing that method. The most time-consuming
methods can thus be instantly isolated.
The Trace tab page also displays the Total Execution Time for each method - this value will be the same
as the Method Execution Time for 'stand-alone' methods - i.e., methods without any sub-methods. In the
case of methods with sub-methods however, the Total Execution Time will be the sum total of the Method
Execution Time of each sub-method invoked within. This is because a 'parent' method completes
execution only when all its child/sub-methods finish executing.
With the help of the Trace tab page therefore, you can accurately trace the method that takes the
longest to execute, when that method began execution, and which 'parent method' (if any) invoked
the method.
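The parent/child timing rule described above can be sketched as a small recursive computation over a trace tree. The TraceNode class below is a hypothetical illustration only (not eG's actual data model), and it assumes one plausible reading of the rule: a parent's Total Execution Time is its own time plus, recursively, that of every sub-method.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical trace node: each method carries its own execution time plus sub-methods.
class TraceNode {
    final String method;
    final double methodExecTime; // seconds spent in this method's own code
    final List<TraceNode> children = new ArrayList<>();

    TraceNode(String method, double methodExecTime) {
        this.method = method;
        this.methodExecTime = methodExecTime;
    }

    // Total Execution Time: for a stand-alone method this equals its own time;
    // for a parent it also includes every sub-method, since a parent completes
    // execution only when all its children have finished.
    double totalExecTime() {
        double total = methodExecTime;
        for (TraceNode child : children) {
            total += child.totalExecTime();
        }
        return total;
    }
}

public class TraceDemo {
    public static void main(String[] args) {
        TraceNode parent = new TraceNode("doGet", 0.25);
        parent.children.add(new TraceNode("executeQuery", 0.5));
        System.out.println(parent.totalExecTime()); // prints 0.75
    }
}
```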
Next, click on the SQL/Errors tab page. This tab page lists all the SQL queries the transaction executes
on its backend database, and/or all the errors detected in the transaction's Java code. The query list
(see Figure 2.12) is typically arranged in the descending order of the query execution Duration, and
thus leads you to the long-running queries right away! You can even scrutinize the time-consuming
query on-the-fly, and suggest improvements to your administrator instantly.
Figure 2.12: Queries displayed in the SQL/Error tab page
When displaying errors, the SQL/Error tab page does not display the error message alone, but displays
the complete code block that could have caused the error to occur. By carefully scrutinizing the block,
you can easily zero-in on the 'exact line of code' that could have forced the error - this means that
besides pointing you to bugs in your code, the SQL/Error tab page also helps you initiate measures to
fix the same.
Figure 2.13: Errors displayed in the SQL/Error tab page
This way, with the help of the three tab pages - At-A-Glance, Trace, and SQL/Error - you can effectively
analyze and accurately diagnose the root-cause of slowdowns in transactions to your Java
applications.
The detailed diagnosis of the Error transactions measure reveals the top-10 (by default) transactions,
in terms of TOTAL RESPONSE TIME, that have encountered errors. To know the nature of the errors that
occurred, click on the URL Tree icon in Figure 2.14. This will lead you to the URL Tree window, which has
already been elaborately discussed.
Figure 2.14: The detailed diagnosis of the Error transactions measure
2.1.2.6.1 JVM GC Test
Manual memory management is time-consuming and error-prone, and most programs still contain leaks.
This is doubly true of programs that use exception handling and/or threads. Garbage collection (GC)
is a part of WebLogic's JVM that automatically determines what memory a program is no longer using,
and recycles it for other use. It is also known as "automatic storage (or memory) reclamation". The
JVM GC test reports the performance statistics pertaining to the JVM's garbage collection.
Purpose
Reports the performance statistics pertaining to the JVM's garbage collection
Target of the
test
A WebLogic application server
Agent
deploying the
test
An internal agent
Configurable
parameters for
the test
1. TEST PERIOD - How often should the test be executed
2. HOST - The IP address of the WebLogic server
3. PORT - The port number of the WebLogic server
4. JREHOME - The path to the directory in which the JVM to be monitored exists
5. LOGFILENAME - The full path to the log file which stores the GC output.
6. DETAILED DIAGNOSIS - To make diagnosis more efficient and accurate, the eG
Enterprise suite embeds an optional detailed diagnostic capability. With this
capability, the eG agents can be configured to run detailed, more elaborate tests
as and when specific problems are detected. To enable the detailed diagnosis
capability of this test for a particular server, choose the On option. To disable
the capability, click on the Off option.
The option to selectively enable/disable the detailed diagnosis capability will be
available only if the following conditions are fulfilled:
- The eG manager license should allow the detailed diagnosis capability
- Both the normal and abnormal frequencies configured for the detailed
diagnosis measures should not be 0.
Outputs of the
test
One set of results for every GC
Measurements
made by the
test
Measurement
Measurement Unit
Interpretation
Number of GC:
The number of times
garbage collection has
happened.
Number
If adequate memory is not allotted to
the JVM, then the value of this measure
would be very high. A high value of this
measure is indicative of a high frequency
of GC. This is not a good sign, as GC,
during its execution, has the tendency of
suspending an application, and a high
frequency of GC would only adversely
impact the application's performance. To
avoid this, it is recommended that you
allot sufficient memory to the JVM.
The detailed diagnosis of the Number of
GC measure, if enabled, provides details
such as the heap size before GC was
performed, the heap size after GC was
performed, and the total time spent by
the JVM in garbage collection. The
difference between the heap sizes will
help administrators figure out how
effective GC is. The total GC time will
help administrators gauge how severely
GC has impacted application
performance.
Total GC time:
The sum of the time
taken by all garbage
collections.
Secs
If adequate memory is not allotted to
the JVM, then the value of this measure
would be very high. This is not a good
sign, as GC, during its execution, has
the tendency of suspending an
application, and a high value of this
measure would only adversely impact
the application's performance. To avoid
this, it is recommended that you allot
sufficient memory to the JVM.
Avg GC frequency:
The frequency with
which the JVM
performed GC.
Sec
If adequate memory is not allotted to
the JVM, then the value of this measure
would be very low. A low value of this
measure is indicative of a high frequency
of GC. This is not a good sign, as GC,
during its execution, has the tendency of
suspending an application, and a high
frequency of GC would only adversely
impact the application's performance. To
avoid this, it is recommended that you
allot sufficient memory to the JVM.
Avg GC pause:
The average time the
application is suspended
while garbage collection
is in progress.
Secs
If garbage collections are taking a long
time to complete, it could indicate that a
very large amount of memory has been
allocated to the JVM. Long GC pauses,
too, hinder application performance.
Avg GC overhead:
The percentage of time
utilized by the JVM for
garbage collection
Percent
By carefully examining the application
behavior in terms of memory utilization,
you should arrive at an optimal ratio of
the number of times the GC needs to run
and how long it should take to complete.
Accordingly, memory allocation to the
JVM can be performed.
Max GC pause:
The maximum time
spent by the JVM on
garbage collection,
during the last
measurement period
Secs
Avg heap before GC:
The average heap size
prior to garbage
collection
KB
Avg heap after GC:
The average heap size
after garbage collection
KB
The difference between the value of this
measure and the Avg heap before GC
measure provides the amount of
memory that has been released by GC.
This value is a good indicator of the
effectiveness of GC.
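Outside of eG's agent, the same GC statistics can be read from any JVM through the standard JMX MXBeans. The standalone Java sketch below is illustrative only (not eG's implementation) and maps roughly onto the Number of GC, Total GC time, and Avg GC overhead measures:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {
    public static void main(String[] args) {
        long totalCount = 0;
        long totalTimeMs = 0;
        // One MXBean per collector (e.g. young- and old-generation collectors).
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long count = gc.getCollectionCount();  // ~ "Number of GC"
            long timeMs = gc.getCollectionTime();  // ~ "Total GC time" (milliseconds)
            if (count > 0) {
                totalCount += count;
                totalTimeMs += timeMs;
            }
            System.out.printf("%s: count=%d, time=%dms%n", gc.getName(), count, timeMs);
        }
        // ~ "Avg GC overhead": percentage of elapsed JVM time spent in GC.
        long uptimeMs = ManagementFactory.getRuntimeMXBean().getUptime();
        System.out.printf("Total GCs: %d, GC overhead: %.2f%%%n",
                totalCount, 100.0 * totalTimeMs / uptimeMs);
    }
}
```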
2.1.2.6.2 Java Classes Test
This test reports the number of classes loaded into/unloaded from memory.
Purpose
Reports the number of classes loaded into/unloaded from memory
Target of the
test
A WebLogic server
Agent
deploying the
test
An internal/remote agent
Configurable
parameters for
the test
1. TEST PERIOD - How often should the test be executed
2. HOST - The host for which the test is to be configured
3. PORT - The port number at which the specified HOST listens
4. MODE - This test can extract metrics from the Java application using either of
the following mechanisms:
- Using SNMP-based access to the Java runtime MIB statistics;
- By contacting the Java runtime (JRE) of the application via JMX
To configure the test to use SNMP, select the SNMP option. On the other hand,
choose the JMX option to configure the test to use JMX instead. By default, the
JMX option is chosen here.
5. JMX REMOTE PORT - This parameter appears only if the MODE is set to JMX.
Here, specify the port at which the JMX listens for requests from remote hosts.
Ensure that you specify the same port that you configured in the
management.properties file in the <JAVA_HOME>\jre\lib\management folder used by
the target application (refer to the Monitoring Java Applications document).
6. USER, PASSWORD, and CONFIRM PASSWORD - These parameters appear only if
the MODE is set to JMX. If JMX requires authentication only (but no security), then
ensure that the USER and PASSWORD parameters are configured with the
credentials of a user with read-write access to JMX. To know how to create this
user, refer to the Monitoring Java Applications document. Confirm the password
by retyping it in the CONFIRM PASSWORD text box.
7. JNDINAME - This parameter appears only if the MODE is set to JMX. The JNDINAME
is a lookup name for connecting to the JMX connector. By default, this is
jmxrmi. If you have registered the JMX connector in the RMI registry using a
different lookup name, then you can change this default value to reflect the
same.
8. SNMPPORT - This parameter appears only if the MODE is set to SNMP. Here,
specify the port number through which the server exposes its SNMP MIB. Ensure
that you specify the same port you configured in the management.properties file
in the <JAVA_HOME>\jre\lib\management folder used by the target application (see
page 13).
9. SNMPVERSION - This parameter appears only if the MODE is set to SNMP. The
default selection in the SNMPVERSION list is v1. However, for this test to work,
you have to select SNMP v2 or v3 from this list, depending upon which version of
SNMP is in use in the target environment.
10. SNMPCOMMUNITY - This parameter appears only if the MODE is set to SNMP.
Here, specify the SNMP community name that the test uses to communicate
with the target server. The default is public. This parameter is specific to SNMP v1
and v2 only. Therefore, if the SNMPVERSION chosen is v3, then this parameter
will not appear.
11. USERNAME - This parameter appears only when v3 is selected as the
SNMPVERSION. SNMP version 3 (SNMPv3) is an extensible SNMP Framework
which supplements the SNMPv2 Framework, by additionally supporting message
security, access control, and remote SNMP configuration capabilities. To extract
performance statistics from the MIB using the highly secure SNMP v3 protocol,
the eG agent has to be configured with the required access privileges; in other
words, the eG agent should connect to the MIB using the credentials of a user
with access permissions to the MIB. Therefore, specify the name of such a user
against the USERNAME parameter.
12. AUTHTYPE - This parameter too appears only if v3 is selected as the
SNMPVERSION. From the AUTHTYPE list box, choose the authentication
algorithm using which SNMP v3 hashes the specified USERNAME and
PASSWORD to ensure the security of SNMP transactions. You can choose
between the following options:
- MD5 - Message Digest Algorithm
- SHA - Secure Hash Algorithm
13. ENCRYPTFLAG - This flag appears only when v3 is selected as the
SNMPVERSION. By default, the eG agent does not encrypt SNMP requests.
Accordingly, the ENCRYPTFLAG is set to NO by default. To ensure that SNMP
requests sent by the eG agent are encrypted, select the YES option.
14. ENCRYPTTYPE - If the ENCRYPTFLAG is set to YES, then you will have to
mention the encryption type by selecting an option from the ENCRYPTTYPE list.
SNMP v3 supports the following encryption types:
- DES - Data Encryption Standard
- AES - Advanced Encryption Standard
15. ENCRYPTPASSWORD - Specify the encryption password here.
16. CONFIRM PASSWORD - Confirm the encryption password by retyping it here.
17. TIMEOUT - This parameter appears only if the MODE is set to SNMP. Here,
specify in the TIMEOUT text box the duration (in seconds) within which the SNMP
query executed by this test should time out. The default is 10 seconds.
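For reference, the management.properties entries that the JMX and SNMP parameters above refer to typically look like the following; the port numbers and flag values shown are illustrative only and must match your own environment:

```properties
# JMX remote access (used when MODE is JMX)
com.sun.management.jmxremote.port=9004
com.sun.management.jmxremote.authenticate=true
com.sun.management.jmxremote.ssl=false

# JVM SNMP agent (used when MODE is SNMP)
com.sun.management.snmp.port=1161
com.sun.management.snmp.interface=0.0.0.0
com.sun.management.snmp.acl=true
```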
Outputs of the
test
One set of results for the server being monitored
Measurements
made by the
test
Measurement
Measurement Unit
Interpretation
Classes loaded:
Indicates the number of classes
currently loaded into memory.
Number
Classes unloaded:
Indicates the number of classes
currently unloaded from
memory.
Number
Classes are fundamental to the
design of the Java programming
language. Typically, Java
applications install a variety of
class loaders (that is, classes that
implement java.lang.ClassLoader)
to allow different portions of the
container, and the applications
running on the container, to have
access to different repositories of
available classes and resources. A
consistent decrease in the
number of classes loaded and
unloaded could indicate a road-
block in the loading/unloading of
classes by the class loader. If left
unchecked, critical
resources/classes could be
rendered inaccessible to the
application, thereby severely
affecting its performance.
Total classes loaded:
Indicates the total number of
classes loaded into memory
since the JVM started, including
those subsequently unloaded.
Number
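These class-loading counters mirror what any JVM exposes through the standard ClassLoadingMXBean. The minimal standalone Java sketch below (illustrative only, not eG's agent code) reads the same three values:

```java
import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;

public class ClassLoadStats {
    public static void main(String[] args) {
        ClassLoadingMXBean cl = ManagementFactory.getClassLoadingMXBean();
        // ~ "Classes loaded": classes currently in memory
        System.out.println("Currently loaded:  " + cl.getLoadedClassCount());
        // ~ "Classes unloaded": classes unloaded since the JVM started
        System.out.println("Unloaded so far:   " + cl.getUnloadedClassCount());
        // ~ "Total classes loaded": everything ever loaded, including unloaded
        System.out.println("Total ever loaded: " + cl.getTotalLoadedClassCount());
    }
}
```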
2.1.2.6.3 JVM Threads Test
This test reports the status of threads on the JVM, and also reveals resource-hungry threads, so that
threads that are unnecessarily consuming CPU resources can be killed.
Purpose
Reports the status of threads on the JVM, and also reveals resource-hungry threads,
so that threads that are unnecessarily consuming CPU resources can be killed
Target of the
test
A WebLogic server
Agent
deploying the
test
An internal/remote agent
Configurable
parameters for
the test
1. TEST PERIOD - How often should the test be executed
2. HOST - The host for which the test is to be configured
3. PORT - The port number at which the specified HOST listens
4. MODE - This test can extract metrics from the Java application using either of
the following mechanisms:
- Using SNMP-based access to the Java runtime MIB statistics;
- By contacting the Java runtime (JRE) of the application via JMX
To configure the test to use SNMP, select the SNMP option. On the other hand,
choose the JMX option to configure the test to use JMX instead. By default, the
JMX option is chosen here.
5. JMX REMOTE PORT - This parameter appears only if the MODE is set to JMX.
Here, specify the port at which the JMX listens for requests from remote hosts.
Ensure that you specify the same port that you configured in the
management.properties file in the <JAVA_HOME>\jre\lib\management folder used by
the target application (refer to the Monitoring Java Applications document).
6. USER, PASSWORD, and CONFIRM PASSWORD - These parameters appear only if
the MODE is set to JMX. If JMX requires authentication only (but no security), then
ensure that the USER and PASSWORD parameters are configured with the
credentials of a user with read-write access to JMX. To know how to create this
user, refer to the Monitoring Java Applications document. Confirm the password
by retyping it in the CONFIRM PASSWORD text box.
7. JNDINAME - This parameter appears only if the MODE is set to JMX. The JNDINAME
is a lookup name for connecting to the JMX connector. By default, this is
jmxrmi. If you have registered the JMX connector in the RMI registry using a
different lookup name, then you can change this default value to reflect the
same.
8. SNMPPORT - This parameter appears only if the MODE is set to SNMP. Here,
specify the port number through which the server exposes its SNMP MIB. Ensure
that you specify the same port you configured in the management.properties file
in the <JAVA_HOME>\jre\lib\management folder used by the target application (refer
to the Monitoring Java Applications document).
9. SNMPVERSION - This parameter appears only if the MODE is set to SNMP. The
default selection in the SNMPVERSION list is v1. However, for this test to work,
you have to select SNMP v2 or v3 from this list, depending upon which version of
SNMP is in use in the target environment.
10. SNMPCOMMUNITY - This parameter appears only if the MODE is set to SNMP.
Here, specify the SNMP community name that the test uses to communicate
with the target server. The default is public. This parameter is specific to SNMP v1
and v2 only. Therefore, if the SNMPVERSION chosen is v3, then this parameter
will not appear.
11. USERNAME - This parameter appears only when v3 is selected as the
SNMPVERSION. SNMP version 3 (SNMPv3) is an extensible SNMP Framework
which supplements the SNMPv2 Framework, by additionally supporting message
security, access control, and remote SNMP configuration capabilities. To extract
performance statistics from the MIB using the highly secure SNMP v3 protocol,
the eG agent has to be configured with the required access privileges; in other
words, the eG agent should connect to the MIB using the credentials of a user
with access permissions to the MIB. Therefore, specify the name of such a user
against the USERNAME parameter.
12. AUTHTYPE - This parameter too appears only if v3 is selected as the
SNMPVERSION. From the AUTHTYPE list box, choose the authentication
algorithm using which SNMP v3 hashes the specified USERNAME and
PASSWORD to ensure the security of SNMP transactions. You can choose
between the following options:
- MD5 - Message Digest Algorithm
- SHA - Secure Hash Algorithm
13. ENCRYPTFLAG - This flag appears only when v3 is selected as the
SNMPVERSION. By default, the eG agent does not encrypt SNMP requests.
Accordingly, the ENCRYPTFLAG is set to NO by default. To ensure that SNMP
requests sent by the eG agent are encrypted, select the YES option.
14. ENCRYPTTYPE - If the ENCRYPTFLAG is set to YES, then you will have to
mention the encryption type by selecting an option from the ENCRYPTTYPE list.
SNMP v3 supports the following encryption types:
- DES - Data Encryption Standard
- AES - Advanced Encryption Standard
15. ENCRYPTPASSWORD - Specify the encryption password here.
16. CONFIRM PASSWORD - Confirm the encryption password by retyping it here.
17. TIMEOUT - This parameter appears only if the MODE is set to SNMP. Here,
specify in the TIMEOUT text box the duration (in seconds) within which the SNMP
query executed by this test should time out. The default is 10 seconds.
18. PCT LOW CPU UTIL THREADS - This test reports the number of threads in the
JVM that are consuming low CPU. This thread count will include only those
threads for which the CPU usage is equal to or lesser than the value specified in
the PCT LOW CPU UTIL THREADS text box. The default value displayed here is
30.
19. PCT MEDIUM CPU UTIL THREADS - This test reports the number of threads in the
JVM that are consuming CPU to a medium extent. This thread count will include
only those threads for which the CPU usage is higher than the PCT LOW CPU
UTIL THREADS configuration and is lower than or equal to the value specified in
the PCT MEDIUM CPU UTIL THREADS text box. The default value displayed here
is 50.
20. PCT HIGH CPU UTIL THREADS - This test reports the number of threads in the
JVM that are consuming high CPU. This thread count will include only those
threads for which the CPU usage is either greater than the PCT MEDIUM CPU
UTIL THREADS configuration, or is equal to or greater than the value specified in
the PCT HIGH CPU UTIL THREADS text box. The default value displayed here is
70.
21. DD FREQUENCY - Refers to the frequency with which detailed
diagnosis measures are to be generated for this test. The default is 1:1. This
indicates that, by default, detailed measures will be generated every time this
test runs, and also every time the test detects a problem. You can modify this
frequency, if you so desire. Also, if you intend to disable the detailed diagnosis
capability for this test, you can do so by specifying none against DD
FREQUENCY.
22. DETAILED DIAGNOSIS - To make diagnosis more efficient and accurate, the eG
Enterprise suite embeds an optional detailed diagnostic capability. With this
capability, the eG agents can be configured to run detailed, more elaborate tests
as and when specific problems are detected. To enable the detailed diagnosis
capability of this test for a particular server, choose the On option. To disable
the capability, click on the Off option.
The option to selectively enable/disable the detailed diagnosis capability will be
available only if the following conditions are fulfilled:
- The eG manager license should allow the detailed diagnosis capability
- Both the normal and abnormal frequencies configured for the detailed
diagnosis measures should not be 0.
Outputs of the
test
One set of results for the server being monitored
Measurements
made by the
test
Measurement
Measurement Unit
Interpretation
Total threads:
Indicates the total number of
threads (including daemon and
non-daemon threads).
Number
Runnable threads:
Indicates the current number of
threads in a runnable state.
Number
The detailed diagnosis of this
measure, if enabled, provides the
name of the threads, the CPU
usage by the threads, the time
for which the thread was in a
blocked state, waiting state, etc.
Blocked threads:
Indicates the number of threads
that are currently in a blocked
state.
Number
If a thread is trying to take a lock
(to enter a synchronized block),
but the lock is already held by
another thread, then such a
thread is called a blocked thread.
The detailed diagnosis of this
measure, if enabled, provides in-
depth information related to the
blocked threads.
Waiting threads:
Indicates the number of threads
that are currently in a waiting
state.
Number
A thread is said to be in a Waiting
state if, after entering a
synchronized block, it has called
wait() on the lock object and is
now waiting for another thread to
notify it that it has released the
lock.
Ideally, the value of this measure
should be low. A very high value
could be indicative of excessive
waiting activity on the JVM. You
can use the detailed diagnosis of
this measure, if enabled, to figure
out which threads are currently in
the waiting state.
While waiting, the Java
application program does no
productive work and its ability to
complete the task-at-hand is
degraded. A certain amount of
waiting may be acceptable for
Java application programs.
However, when the amount of
time spent waiting becomes
excessive or if the number of
times that waits occur exceeds a
reasonable amount, the Java
application program may not be
programmed correctly to take
advantage of the available
resources. When this happens,
the delay caused by the waiting
Java application programs
elongates the response time
experienced by an end user. An
enterprise may use Java
application programs to perform
various functions. Delays based
on abnormal degradation
consume employee time and may
be costly to corporations.
Timed waiting threads:
Indicates the number of threads
in a TIMED_WAITING state.
Number
When a thread is in the
TIMED_WAITING state, it implies
that the thread is waiting for
another thread to do something,
but will give up after a specified
time out period.
To view the details of threads in
the TIMED_WAITING state, use
the detailed diagnosis of this
measure, if enabled.
Low CPU threads:
Indicates the number of threads
that are currently consuming
CPU lower than the value
configured in the PCT LOW CPU
UTIL THREADS text box.
Number
Medium CPU threads:
Indicates the number of threads
that are currently consuming
CPU that is higher than the value
configured in the PCT LOW CPU
UTIL THREADS text box and is
lower than or equal to the value
specified in the PCT MEDIUM CPU
UTIL THREADS text box.
Number
High CPU threads:
Indicates the number of threads
that are currently consuming
CPU that is either greater than
the percentage configured in the
PCT MEDIUM CPU UTIL THREADS
text box, or equal to or greater
than the value configured in the
PCT HIGH CPU UTIL THREADS
text box.
Number
Ideally, the value of this measure
should be very low. A high value
is indicative of a resource
contention at the JVM. Under
such circumstances, you might
want to identify the resource-
hungry threads and kill them, so
that application performance is
not hampered. To know which
threads are consuming excessive
CPU, use the detailed diagnosis of
this measure.
Peak threads:
Indicates the highest number of
live threads since the JVM started.
Number
Started threads:
Indicates the total number of
threads started (including
daemon, non-daemon, and
terminated) since the JVM started.
Number
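The thread counts and states reported above correspond closely to what the JVM exposes through the standard ThreadMXBean. The standalone Java sketch below is illustrative only (not eG's agent code); it reads the overall counts and tallies live threads by state (RUNNABLE, BLOCKED, WAITING, TIMED_WAITING, etc.):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.EnumMap;
import java.util.Map;

public class ThreadStats {
    public static void main(String[] args) {
        ThreadMXBean tm = ManagementFactory.getThreadMXBean();
        System.out.println("Total threads:   " + tm.getThreadCount());            // ~ "Total threads"
        System.out.println("Peak threads:    " + tm.getPeakThreadCount());        // ~ "Peak threads"
        System.out.println("Started threads: " + tm.getTotalStartedThreadCount()); // ~ "Started threads"

        // Tally live threads by state; covers the Runnable/Blocked/Waiting/
        // Timed waiting measures above.
        Map<Thread.State, Integer> byState = new EnumMap<>(Thread.State.class);
        for (ThreadInfo info : tm.getThreadInfo(tm.getAllThreadIds())) {
            if (info != null) { // a thread may have exited between the two calls
                byState.merge(info.getThreadState(), 1, Integer::sum);
            }
        }
        System.out.println(byState);
    }
}
```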