
Serviceability in HotSpot

The HotSpot Virtual Machine contains several technologies that allow its operation to be observed by another Java process:
Note: HotSpot also includes the following mechanisms that will produce output on the standard output TTY. These mechanisms are not normally used by observability tools and won't be discussed further here.

For each of these technologies, there is code in the J2SE repository that uses it and/or allows user code to use it; see Serviceability in the J2SE Repository.

The following table contains links to more information about each of these technologies, shows where these technologies reside in the HotSpot repository, and contains links to information about the use of the technologies in the J2SE repository.

Technology: JVM TI - Java Virtual Machine Tool Interface
Source Location:
    hotspot/src/share/vm/prims/jvmti.xml
    hotspot/src/share/vm/prims/jvmtiGen.java
    hotspot/src/share/vm/prims/jvmti*
    build/.../generated/jvmtifiles/jvmtiEnter.cpp
    build/.../generated/jvmtifiles/jvmtiEnterTrace.cpp
    build/.../generated/jvmtifiles/jvmtiEnv.hpp
    build/.../generated/jvmtifiles/jvmtiEnvRecommended.cpp
    build/.../generated/jvmtifiles/jvmtiEnvStub.cpp
    build/.../generated/jvmtifiles/jvmti.h
        (copied to j2se/src/share/javavm/export/jvmti.h)
Usage in the J2SE Repository: J2SE Info, Bugs

Technology: Monitoring and Management
Source Location:
    hotspot/src/share/vm/services/ (most but not all)
Usage in the J2SE Repository: J2SE Info, Bugs

Technology: Dynamic Attach Mechanism
Source Location:
    src/share/vm/services/attachListener.*
    src/os/linux/vm/attachListener_linux.cpp
    src/os/solaris/vm/attachListener_solaris.cpp
    src/os/win32/vm/attachListener_win32.cpp
Usage in the J2SE Repository: J2SE Info

Technology: Jvmstat Performance Counters
Source Location:
    src/share/vm/prims/perf.cpp
    src/share/vm/runtime/perfMemory.cpp
    src/share/vm/runtime/perfData.cpp
    src/share/vm/runtime/statSampler.cpp
    src/share/vm/services/*Service.cpp
    src/os/solaris/vm/perfMemory_solaris.cpp
    src/os/linux/vm/perfMemory_linux.cpp
    src/os/win32/vm/perfMemory_win32.cpp
Usage in the J2SE Repository: J2SE Info, Bugs

Technology: Serviceability Agent
Source Location:
    hotspot/agent/
    hotspot/src/share/vm/runtime/vmStructs.hpp
    hotspot/src/share/vm/runtime/vmStructs.cpp
    hotspot/cpu/<cpu>/vm/vmstructs_<cpu>.hpp
    hotspot/os_cpu/<os-cpu>/vm/vmstructs_<os-cpu>.hpp
Usage in the J2SE Repository: J2SE Info, Usenix Serviceability Agent paper

Technology: DTrace Support (Solaris only)
Source Location:
    hotspot/src/os/solaris/dtrace/
    hotspot/build/solaris/makefiles/dtrace.make
More information: DTrace Probes in HotSpot, User Guide

Technology: pstack Support (Solaris only)
Source Location:
    hotspot/src/os/solaris/dtrace/
More information: User Guide

Build and Implementation Notes

HotSpot JVM TI

The base definition of JVM TI is contained in file jvmti.xml which is processed at HotSpot build time by hotspot/src/share/vm/prims/jvmtiGen.java and hotspot/src/share/vm/prims/jvmtiEnvFill.java to create the .cpp and .hpp files shown above in the build/.../ directory. These files are then compiled during the build. The resulting JVM TI implementation is included in libjvm.so/jvm.dll with the rest of HotSpot.

The HotSpot build process creates the interface file jvmti.h, which is used by JVM TI agents such as the JPDA back-end. jvmti.h is copied from the HotSpot build area and checked into the J2SE repository whenever changes are made to the interface. This file contains a JVM TI version number that is compiled into the back-end and checked, during back-end startup, against the JVM TI version in HotSpot.

In addition to the files shown above, JVM TI has hooks in many other HotSpot files, mainly for detecting events that might need to be reported to JVM TI agents. You can see such usages by running 'grep -i jvmti' over the other HotSpot files. For many debugging functions, JVM TI also needs hooks in the generated interpreter. Since the mere presence of these hooks can slow down applications, the interpreter is normally generated without them. If debugging is to be done, a -agentlib option must be used on the Java command line to specify the debugging agent to be run. This option is detected early in HotSpot startup and causes the agent initialization code to be run before the interpreter is generated. The agent's startup code requests the JVM TI debugging capabilities, which in turn causes the interpreter to be generated in debug mode.
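As a rough illustration of that startup sequence, the following is a minimal sketch of an agent's Agent_OnLoad requesting a debugging-related capability. The capability chosen and the error handling are simplified; this is not the actual JPDA back-end code.

    #include <string.h>
    #include <jvmti.h>   // the jvmti.h generated by the HotSpot build, described above

    // Minimal sketch of a JVM TI agent's startup, loaded with -agentlib:<name>.
    // Requesting a debugging capability here, before the interpreter is
    // generated, is what causes HotSpot to generate the interpreter in debug mode.
    JNIEXPORT jint JNICALL Agent_OnLoad(JavaVM *vm, char *options, void *reserved) {
      jvmtiEnv *jvmti = NULL;

      // GetEnv fails if the JVM TI version compiled into the agent is not
      // supported by the running HotSpot.
      if (vm->GetEnv((void **)&jvmti, JVMTI_VERSION_1_0) != JNI_OK) {
        return JNI_ERR;
      }

      jvmtiCapabilities caps;
      memset(&caps, 0, sizeof(caps));
      caps.can_generate_breakpoint_events = 1;   // one example of a debugging capability

      if (jvmti->AddCapabilities(&caps) != JVMTI_ERROR_NONE) {
        return JNI_ERR;
      }
      return JNI_OK;   // event callbacks would normally be registered here as well
    }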

For JDK 7, we are investigating the possibility of allowing debugging agents to start dynamically after HotSpot is already running. See 4841257: Should be able to 'attach on demand' to debug

Here (.pdf) is a presentation about the JVM TI implementation.


HotSpot Monitoring and Management

File src/share/vm/services/jmm.h defines a Sun private interface that is implemented in HotSpot and is used by the monitoring and management code in the J2SE repository. jmm.h is copied into the J2SE repository so that monitoring and management native methods can use it to call into HotSpot to extract information. jmm.h contains a version number that is used at runtime to verify interface compatibility between the Java code and the HotSpot that is being monitored.
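The version handshake follows the usual pattern sketched below; the names here are hypothetical stand-ins, not the actual jmm.h declarations.

    // Hypothetical sketch of the interface-version check described above.
    // None of these names come from jmm.h; the point is only that a version
    // constant compiled into the J2SE native code is compared at runtime with
    // the version reported by the HotSpot being monitored.
    #include <stdio.h>

    static const int COMPILED_MGMT_VERSION = 0x20010000;   // made-up value

    // Stand-in for "ask the running VM which version it implements"; a real
    // client would obtain this through the jmm.h interface itself.
    static int vm_reported_mgmt_version() { return 0x20010000; }

    bool management_interface_compatible() {
      int vm_version = vm_reported_mgmt_version();
      if (vm_version != COMPILED_MGMT_VERSION) {
        fprintf(stderr, "management interface mismatch: compiled 0x%x, VM reports 0x%x\n",
                COMPILED_MGMT_VERSION, vm_version);
        return false;
      }
      return true;
    }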

See Monitoring and Management in the J2SE Repository for more information.

HotSpot Dynamic Attach Mechanism

This is a Sun extension that allows a tool to 'attach' to another process running Java code and launch a JVM TI agent or a java.lang.instrument agent in that process. This also allows the system properties to be obtained from the target JVM.

The Sun implementation of this API also includes some HotSpot-specific methods that allow additional information to be obtained from HotSpot:

Dynamic attach uses an attach listener thread in the target JVM, which is started when the first attach request occurs. On Linux and Solaris, the client creates a file named .attach_pid<pid> and sends a SIGQUIT to the target JVM process. The existence of this file causes the SIGQUIT handler in HotSpot to start the attach listener thread. On Windows, the client uses the Win32 CreateRemoteThread function to create a new thread in the target process. The attach listener thread then communicates with the source JVM in an OS-dependent manner:
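On Linux and Solaris, the client side of that trigger step amounts to something like the sketch below. The file location shown is an assumption for illustration; real clients also handle permissions, timeouts, cleanup of the trigger file, and the follow-on communication channel.

    #include <fcntl.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    // Hedged sketch of the Linux/Solaris attach trigger described above:
    // create the .attach_pid<pid> file and send SIGQUIT so the target JVM's
    // SIGQUIT handler starts the attach listener thread.
    void trigger_attach_listener(pid_t target_pid) {
      char path[64];
      // Location assumed for illustration; actual clients try the target's
      // working directory and/or the temporary directory.
      snprintf(path, sizeof(path), "/tmp/.attach_pid%d", (int)target_pid);

      int fd = creat(path, 0600);   // the existence of this file is the trigger
      if (fd >= 0) close(fd);

      kill(target_pid, SIGQUIT);    // HotSpot's handler notices the file and starts the listener

      // The client would then connect to the OS-dependent endpoint created by
      // the attach listener thread and issue commands over it.
    }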


HotSpot Jvmstat Performance Counters

The HotSpot JVM exports a set of instrumentation objects, or counters as they are typically called. The counters are always on and are updated by HotSpot in such a way as to impose minimal overhead on the running application. The set of counters exported by a JVM is not static, since a JVM may create certain counters only when appropriate arguments are specified on the command line. Furthermore, different versions of a JVM may export very different sets of instrumentation. The counters have structured names such as sun.gc.generation.1.name, java.threads.live, and java.cls.loadedClasses. The names of these counters and the data structures used to represent them are considered private, uncommitted interfaces to the HotSpot JVM. Users should not become dependent on any counter names, particularly those that start with prefixes other than "java.".

These counters are exposed to observers in different processes by means of a shared memory file. This allows observers in other processes to poll the counters without imposing any overhead on the target process. The java.io.tmpdir system property contains the pathname of the directory in which this file resides. The Solaris and Linux shared memory implementations use the mmap interface with a backing store file to implement named shared memory. Using the file system as the name space for shared memory allows a common name space to be supported across a variety of platforms, and provides a name space that Java applications can deal with through simple file APIs. The Solaris and Linux implementations store the backing store file in a user-specific temporary directory located in the /tmp file system, which is always a local file system and is sometimes a RAM-based file system. The name of the file is:

/tmp/hsperfdata_user-name/vm-id

The Win32 shared memory implementation uses two objects to represent the shared memory: a Windows kernel-based file mapping object and a backing store file. On Windows, the name space for shared memory is a kernel-based name space that is disjoint from other Win32 name spaces. Since Java is unaware of this name space, a parallel file system based name space is maintained, which provides a common file system based shared memory name space across the supported platforms and one that Java applications can deal with through simple file APIs. For performance and resource cleanup reasons, it is recommended that the user-specific directory and the backing store file be stored in either a RAM-based file system or a local disk based file system. Network-based file systems are not recommended for performance reasons. In addition, use of SMB network-based file systems may result in unsuccessful cleanup of the disk-based resource on exit of the JVM. The Windows TMP and TEMP environment variables, as used by the GetTempPath() Win32 API (see os::get_temp_directory() in os_win32.cpp), control the location of the user-specific directory and the shared memory backing store file. This file must not be on a FAT file system.
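The "no overhead on the target" property comes from the observer simply mapping the backing store file read-only and polling it. A simplified Solaris/Linux sketch follows; the layout of the mapped counter data is HotSpot-private, so it is not parsed here.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    // Simplified sketch: an observer in another process maps the jvmstat
    // backing store file read-only and reads counters out of the mapping.
    // Returns the mapped base address (caller munmap()s), or NULL on failure.
    void* map_hsperfdata(const char* user, int vm_id, size_t* out_len) {
      char path[256];
      snprintf(path, sizeof(path), "/tmp/hsperfdata_%s/%d", user, vm_id);

      int fd = open(path, O_RDONLY);
      if (fd < 0) return NULL;

      struct stat st;
      if (fstat(fd, &st) != 0) { close(fd); return NULL; }

      void* base = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_SHARED, fd, 0);
      close(fd);                    // the mapping remains valid after close
      if (base == MAP_FAILED) return NULL;

      *out_len = (size_t)st.st_size;
      return base;
    }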

HotSpot Serviceability Agent

SA knows how to:

Note that SA runs in a separate process from the target process and executes no code in the target process. However, the target process is halted while SA observes it.

SA consists mostly of Java classes but it contains a small amount of native code to read raw bits from processes and core files.

File src/share/vm/runtime/vmStructs.cpp contains 'declarations' of each HotSpot class and its fields, as well as declarations of processor-dependent items such as registers, the sizes of types, and so on. For the latter, vmStructs.cpp includes arch/cpu-dependent files, e.g.:

As an example, in file src/share/vm/oops/cpCacheOop.hpp we have:
      :
      class constantPoolCacheOopDesc: public arrayOopDesc {
        friend class VMStructs;
       private:
        constantPoolOop _constant_pool;   // the corresponding constant pool
        :
    
In vmStructs.cpp, the _constant_pool field is 'declared' like this:
      nonstatic_field(constantPoolCacheOopDesc, _constant_pool,  constantPoolOop) \
    
Note the 'friend class VMStructs' declaration in the above class. Most classes declare VMStructs to be a friend so that private fields can be accessed.

During the HotSpot build, vmStructs.cpp is compiled into vmStructs.o which is included in libjvm.so. vmStructs.o contains all the data that SA needs to read the HotSpot data structures. At runtime, SA reads this data from the target process or core file.
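Conceptually, each such 'declaration' just records enough metadata for SA to locate a field in a remote address space. The sketch below shows the idea; the field names and layout are illustrative and differ from the actual VMStructEntry used by HotSpot.

    #include <stddef.h>

    // Simplified sketch of what a vmStructs-style field declaration boils down
    // to: a table entry naming the containing type, the field, the field's
    // type, and its offset. SA reads such a table out of the target process or
    // core file and uses it to interpret HotSpot's data structures.
    struct FieldEntry {
      const char* type_name;    // e.g. "constantPoolCacheOopDesc"
      const char* field_name;   // e.g. "_constant_pool"
      const char* field_type;   // e.g. "constantPoolOop"
      int         is_static;    // nonzero for static fields
      size_t      offset;       // offset within the structure, for nonstatic fields
    };

    // What a nonstatic_field(...) line would contribute, for a made-up type:
    struct ExampleType { void* _example_field; };

    static const FieldEntry example_entry = {
      "ExampleType", "_example_field", "void*", 0, offsetof(ExampleType, _example_field)
    };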

The names in vmStructs.cpp are used by the Java code in SA. Thus, if a field named in vmStructs.cpp is deleted or renamed, both vmStructs.cpp and the Java code that accesses that field have to be modified. If this isn't done, SA will fail when it tries to examine a process/core file.
The test agent/jdi/sasanity.sh, which runs the class agent/jdi/SASanityChecker.java, should be run to check for this.

Lastly, the Java code in SA is basically a mirror of the C++ code in HotSpot. If algorithms are changed in HotSpot, the same changes might have to be made in the SA Java code. Because of the tight coupling between the Java classes in SA and the HotSpot data structures, we can only count on an instance of SA being able to analyze the HotSpot that was built from the same HotSpot repository state. In order to detect a mismatch, the HotSpot build places an sa.properties file into sa-jdi.jar. This file contains a version property, e.g.:

sun.jvm.hotspot.runtime.VM.supportedVersion=1.7.0
At run time, SA Java code reads this property and compares it to the version of the HotSpot to be analyzed and throws a VMVersionMismatchException if the versions do not match. This check can be disabled by running the SA tool with
-Dsun.jvm.hotspot.runtime.VM.disableVersionCheck

SA components are built as part of the standard build of the HotSpot repository:

These two files are copied from the HotSpot build area to the JDK build area during a control build (a control build is a build of the control repository, which first builds HotSpot and then builds the J2SE repository, so that the files built by the HotSpot build are available to the J2SE build).

SA includes other components that are used only for debugging HotSpot and are not built as part of the normal HotSpot build. These components are built by running make in the hotspot/agent/make directory. See agent/doc/ for documentation on these tools and for hints on cross-machine core dump debugging.

See also the Usenix Serviceability Agent paper.

DTrace Support

(The files that support DTrace in HotSpot are in hotspot/src/os/solaris/dtrace/.)
HotSpot contains functionality that allows the DTrace jstack() action to show Java frames. In addition, HotSpot contains several built-in USDT probes that allow HotSpot activity to be observed directly from D programs.

jstack() Support

HotSpot contains support for the dtrace jstack() action that allows Java stack frames to be shown. Here is how this works.

USDT dtrace probes in HotSpot

A USDT dtrace probe in a HotSpot file is represented by a macro that calls a non-existent specially-named external function. The parameters that are passed to the function (through the macro) become arguments that the dtrace script client can access. For example, in hashtable.cpp, the new_entry method contains
        HS_DTRACE_PROBE4(hs_private, hashtable__new_entry,  this, hashValue, obj, entry);
    
hs_private is a dtrace provider. HotSpot has three providers: hotspot, hotspot_jni, and hs_private. These providers are defined in the files hotspot.d, hotspot_jni.d, and hs_private.d. These files are combined (along with jhelper.d) into a temporary file, dtrace.d, which is compiled by a dtrace command into the file dtrace.o. dtrace.o contains a special section (SUNW_dof) which contains a mapping of the probes to their locations in the code. In addition to the dtrace.d file, the dtrace command is also given the .o files that contain the probes. dtrace generates new versions of these .o files in which the calls to the non-existent functions have been replaced by one or more 'nop' instructions, and the non-existent symbols have been deleted from the symbol table.

Finally, the modified .o files and dtrace.o are linked into libjvm.so. When libjvm.so is loaded, an _init() routine registers the probe information in the special SUNW_dof section with dtrace in the kernel. When a dtrace script wants to trace a particular area, it interacts with the dtrace code in the kernel, which causes a 'trap' instruction to replace the 'nop', and the kernel handles all the work to get the dtrace actions executed.
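Putting those pieces together, the macro side of the scheme can be pictured with the self-contained sketch below. The macro and function names are made up; the real HS_DTRACE_PROBE* macros and the dtrace-generated symbol names differ.

    // Illustrative sketch only: a probe "fires" by calling an external function
    // that is never defined in the C++ sources. dtrace later rewrites these
    // call sites to nops and records their locations in the SUNW_dof section,
    // so this sketch compiles but will not link without that processing.
    extern "C" void sketch_probe_hs_private_hashtable_new_entry(void* a0, unsigned a1,
                                                                void* a2, void* a3);

    #define SKETCH_DTRACE_PROBE4(provider, name, a0, a1, a2, a3) \
      sketch_probe_##provider##_##name((void*)(a0), (unsigned)(a1), (void*)(a2), (void*)(a3))

    // Roughly what the hashtable new_entry probe shown earlier amounts to:
    void new_entry_sketch(void* table, unsigned hash, void* obj, void* entry) {
      SKETCH_DTRACE_PROBE4(hs_private, hashtable_new_entry, table, hash, obj, entry);
    }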

Because the probe points turn into nop instructions (except for the argument setup), the probes are relatively cost-free in the traced application when not actively probed. The argument setup can be somewhat costly at times, which is why, in HotSpot, the synchronization probes are protected by a command-line switch. Newer versions of dtrace have tricks for checking whether a probe is enabled, which lets you skip that argument setup, but because we have to compile on Solaris 8 and have a special backported version of dtrace, we don't have that functionality.

A problem is that, currently, USDT probes cannot be placed in generated code. This makes tracing Java methods and object allocation tricky, since that is done in generated code. To overcome this, there are a couple of stubs in static code in src/share/vm/runtime/sharedRuntime.cpp which contain the appropriate probes. When the -XX:+ExtendedDTraceProbes flag is passed on the command line, runtime control flow is redirected through these stubs, which slows down execution.

pstack Support

pstack(1) is a Solaris utility that prints stack traces of all threads in a process. HotSpot contains support that allows pstack to find names of Java methods on a stack.

pstack does this by calling into libjvm_db.so to get the names of Java frames. libjvm_db.so is created from the file libjvm_db.c, which finds information in the HotSpot process by using the same JvmOffsets mechanism as the dtrace jstack() action.


Last Modified: 06/29/07