Valgrind Manual


Valgrind Documentation
Release 3.14.0 9 October 2018
Copyright ©2000-2018 AUTHORS
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation
License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, with
no Front-Cover Texts, and with no Back-Cover Texts. A copy of the license is included in the section entitled The
GNU Free Documentation License.
This is the top level of Valgrind’s documentation tree. The documentation is contained in six logically separate
documents, as listed in the following Table of Contents. To get started quickly, read the Valgrind Quick Start Guide.
For full documentation on Valgrind, read the Valgrind User Manual.
Valgrind Documentation
Table of Contents
The Valgrind Quick Start Guide
Valgrind User Manual
Valgrind FAQ
Valgrind Technical Documentation
Valgrind Distribution Documents
GNU Licenses
The Valgrind Quick Start Guide
Release 3.14.0 9 October 2018
Copyright ©2000-2018 Valgrind Developers
Email: valgrind@valgrind.org
The Valgrind Quick Start Guide
Table of Contents
The Valgrind Quick Start Guide ..................................................................... 1
1. Introduction .....................................................................................1
2. Preparing your program .......................................................................... 1
3. Running your program under Memcheck ...........................................................1
4. Interpreting Memcheck’s output ...................................................................1
5. Caveats .........................................................................................3
6. More information ................................................................................3
The Valgrind Quick Start Guide
1. Introduction
The Valgrind tool suite provides a number of debugging and profiling tools that help you make your programs faster
and more correct. The most popular of these tools is called Memcheck. It can detect many memory-related errors
that are common in C and C++ programs and that can lead to crashes and unpredictable behaviour.
The rest of this guide gives the minimum information you need to start detecting memory errors in your program with
Memcheck. For full documentation of Memcheck and the other tools, please read the User Manual.
2. Preparing your program
Compile your program with -g to include debugging information so that Memcheck’s error messages include exact
line numbers. Using -O0 is also a good idea, if you can tolerate the slowdown. With -O1 line numbers in
error messages can be inaccurate, although generally speaking running Memcheck on code compiled at -O1 works
fairly well, and the speed improvement compared to running -O0 is quite significant. Use of -O2 and above is not
recommended as Memcheck occasionally reports uninitialised-value errors which don’t really exist.
3. Running your program under Memcheck
If you normally run your program like this:
myprog arg1 arg2
Use this command line:
valgrind --leak-check=yes myprog arg1 arg2
Memcheck is the default tool. The --leak-check option turns on the detailed memory leak detector.
Your program will run much slower (eg. 20 to 30 times) than normal, and use a lot more memory. Memcheck will
issue messages about memory errors and leaks that it detects.
4. Interpreting Memcheck’s output
Here’s an example C program, in a file called a.c, with a memory error and a memory leak.
#include <stdlib.h>

void f(void)
{
   int *x = malloc(10 * sizeof(int));
   x[10] = 0;                // problem 1: heap block overrun
}                            // problem 2: memory leak -- x not freed

int main(void)
{
   f();
   return 0;
}
Most error messages look like the following, which describes problem 1, the heap block overrun:
==19182== Invalid write of size 4
==19182==    at 0x804838F: f (a.c:6)
==19182==    by 0x80483AB: main (a.c:11)
==19182==  Address 0x1BA45050 is 0 bytes after a block of size 40 alloc’d
==19182==    at 0x1B8FF5CD: malloc (vg_replace_malloc.c:130)
==19182==    by 0x8048385: f (a.c:5)
==19182==    by 0x80483AB: main (a.c:11)
Things to notice:
- There is a lot of information in each error message; read it carefully.
- The 19182 is the process ID; it’s usually unimportant.
- The first line ("Invalid write...") tells you what kind of error it is. Here, the program wrote to some memory it should not have due to a heap block overrun.
- Below the first line is a stack trace telling you where the problem occurred. Stack traces can get quite large and confusing, especially if you are using the C++ STL. Reading them from the bottom up can help. If the stack trace is not big enough, use the --num-callers option to make it bigger.
- The code addresses (eg. 0x804838F) are usually unimportant, but occasionally crucial for tracking down weirder bugs.
- Some error messages have a second component which describes the memory address involved. This one shows that the written memory is just past the end of a block allocated with malloc() on line 5 of a.c.
It’s worth fixing errors in the order they are reported, as later errors can be caused by earlier errors. Failing to do this
is a common cause of difficulty with Memcheck.
Memory leak messages look like this:
==19182== 40 bytes in 1 blocks are definitely lost in loss record 1 of 1
==19182==    at 0x1B8FF5CD: malloc (vg_replace_malloc.c:130)
==19182==    by 0x8048385: f (a.c:5)
==19182==    by 0x80483AB: main (a.c:11)
The stack trace tells you where the leaked memory was allocated. Memcheck cannot tell you why the memory leaked,
unfortunately. (Ignore the "vg_replace_malloc.c", that’s an implementation detail.)
There are several kinds of leaks; the two most important categories are:
- "definitely lost": your program is leaking memory -- fix it!
- "probably lost": your program is leaking memory, unless you’re doing funny things with pointers (such as moving them to point to the middle of a heap block).
Memcheck also reports uses of uninitialised values, most commonly with the message "Conditional jump or move
depends on uninitialised value(s)". It can be difficult to determine the root cause of these errors. Try using the
--track-origins=yes option to get extra information. This makes Memcheck run slower, but the extra information
you get often saves a lot of time figuring out where the uninitialised values are coming from.
If you don’t understand an error message, please consult Explanation of error messages from Memcheck in the
Valgrind User Manual which has examples of all the error messages Memcheck produces.
5. Caveats
Memcheck is not perfect; it occasionally produces false positives, and there are mechanisms for suppressing these
(see Suppressing errors in the Valgrind User Manual). However, it is typically right 99% of the time, so you should be
wary of ignoring its error messages. After all, you wouldn’t ignore warning messages produced by a compiler, right?
The suppression mechanism is also useful if Memcheck is reporting errors in library code that you cannot change.
The default suppression set hides a lot of these, but you may come across more.
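A suppression is a brace-delimited entry in a file passed with --suppressions=, giving a name for the entry, the tool and error kind, and a partial call stack to match. A minimal sketch follows; the suppression name and function name are purely illustrative, and "..." matches any number of intervening frames:

```
{
   hypothetical-libfoo-suppression
   Memcheck:Cond
   fun:some_libfoo_function
   ...
}
```

See "Suppressing errors" in the Valgrind User Manual for the full syntax and the list of error kinds.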
Memcheck cannot detect every memory error your program has. For example, it can’t detect out-of-range reads or
writes to arrays that are allocated statically or on the stack. But it should detect many errors that could crash your
program (eg. cause a segmentation fault).
Try to make your program so clean that Memcheck reports no errors. Once you achieve this state, it is much easier to
see when changes to the program cause Memcheck to report new errors. Experience from several years of Memcheck
use shows that it is possible to make even huge programs run Memcheck-clean. For example, large parts of KDE,
OpenOffice.org and Firefox are Memcheck-clean, or very close to it.
6. More information
Please consult the Valgrind FAQ and the Valgrind User Manual, which have much more information. Note that the
other tools in the Valgrind distribution can be invoked with the --tool option.
Valgrind User Manual
Release 3.14.0 9 October 2018
Copyright ©2000-2018 Valgrind Developers
Email: valgrind@valgrind.org
Valgrind User Manual
Table of Contents
1. Introduction .....................................................................................1
1.1. An Overview of Valgrind ....................................................................... 1
1.2. How to navigate this manual .................................................................... 1
2. Using and understanding the Valgrind core ......................................................... 3
2.1. What Valgrind does with your program ...........................................................3
2.2. Getting started .................................................................................4
2.3. The Commentary .............................................................................. 4
2.4. Reporting of errors .............................................................................6
2.5. Suppressing errors ............................................................................. 7
2.6. Core Command-line Options ....................................................................9
2.6.1. Tool-selection Option ........................................................................10
2.6.2. Basic Options ...............................................................................10
2.6.3. Error-related Options ........................................................................13
2.6.4. malloc-related Options .......................................................................21
2.6.5. Uncommon Options .........................................................................22
2.6.6. Debugging Options ..........................................................................31
2.6.7. Setting Default Options ......................................................................31
2.7. Support for Threads ...........................................................................32
2.7.1. Scheduling and Multi-Thread Performance .....................................................32
2.8. Handling of Signals ...........................................................................33
2.9. Execution Trees .............................................................................. 33
2.10. Building and Installing Valgrind ...............................................................38
2.11. If You Have Problems ........................................................................39
2.12. Limitations ................................................................................. 39
2.13. An Example Run ............................................................................ 42
2.14. Warning Messages You Might See .............................................................43
3. Using and understanding the Valgrind core: Advanced Topics ....................................... 44
3.1. The Client Request mechanism .................................................................44
3.2. Debugging your program using Valgrind gdbserver and GDB ......................................47
3.2.1. Quick Start: debugging in 3 steps ............................................................. 47
3.2.2. Valgrind gdbserver overall organisation ........................................................47
3.2.3. Connecting GDB to a Valgrind gdbserver ......................................................48
3.2.4. Connecting to an Android gdbserver .......................................................... 50
3.2.5. Monitor command handling by the Valgrind gdbserver .......................................... 50
3.2.6. Valgrind gdbserver thread information .........................................................52
3.2.7. Examining and modifying Valgrind shadow registers ............................................52
3.2.8. Limitations of the Valgrind gdbserver ......................................................... 53
3.2.9. vgdb command line options .................................................................. 57
3.2.10. Valgrind monitor commands ................................................................ 59
3.3. Function wrapping ............................................................................62
3.3.1. A Simple Example .......................................................................... 62
3.3.2. Wrapping Specifications ..................................................................... 63
3.3.3. Wrapping Semantics ........................................................................ 64
3.3.4. Debugging ................................................................................. 65
3.3.5. Limitations - control flow .................................................................... 65
3.3.6. Limitations - original function signatures ...................................................... 66
3.3.7. Examples .................................................................................. 66
4. Memcheck: a memory error detector ............................................................. 67
4.1. Overview .................................................................................... 67
4.2. Explanation of error messages from Memcheck .................................................. 67
4.2.1. Illegal read / Illegal write errors ...............................................................67
4.2.2. Use of uninitialised values ................................................................... 68
4.2.3. Use of uninitialised or unaddressable values in system calls ......................................69
4.2.4. Illegal frees .................................................................................69
4.2.5. When a heap block is freed with an inappropriate deallocation function ........................... 70
4.2.6. Overlapping source and destination blocks .....................................................71
4.2.7. Fishy argument values .......................................................................71
4.2.8. Memory leak detection ...................................................................... 72
4.3. Memcheck Command-Line Options ............................................................ 76
4.4. Writing suppression files ...................................................................... 81
4.5. Details of Memcheck’s checking machinery ..................................................... 82
4.5.1. Valid-value (V) bits ......................................................................... 82
4.5.2. Valid-address (A) bits ....................................................................... 84
4.5.3. Putting it all together ........................................................................ 84
4.6. Memcheck Monitor Commands ................................................................ 85
4.7. Client Requests ...............................................................................91
4.8. Memory Pools: describing and working with custom allocators .................................... 92
4.9. Debugging MPI Parallel Programs with Valgrind .................................................95
4.9.1. Building and installing the wrappers .......................................................... 95
4.9.2. Getting started ..............................................................................96
4.9.3. Controlling the wrapper library ............................................................... 96
4.9.4. Functions .................................................................................. 97
4.9.5. Types ......................................................................................98
4.9.6. Writing new wrappers ....................................................................... 98
4.9.7. What to expect when using the wrappers .......................................................98
5. Cachegrind: a cache and branch-prediction profiler ................................................100
5.1. Overview ...................................................................................100
5.2. Using Cachegrind, cg_annotate and cg_merge .................................................. 100
5.2.1. Running Cachegrind ....................................................................... 101
5.2.2. Output File ................................................................................101
5.2.3. Running cg_annotate .......................................................................102
5.2.4. The Output Preamble .......................................................................102
5.2.5. The Global and Function-level Counts ........................................................103
5.2.6. Line-by-line Counts ........................................................................104
5.2.7. Annotating Assembly Code Programs ........................................................106
5.2.8. Forking Programs ..........................................................106
5.2.9. cg_annotate Warnings ...................................................................... 106
5.2.10. Unusual Annotation Cases .................................................................107
5.2.11. Merging Profiles with cg_merge ............................................................108
5.2.12. Differencing Profiles with cg_diff ...........................................................108
5.3. Cachegrind Command-line Options ............................................................109
5.4. cg_annotate Command-line Options ........................................................... 110
5.5. cg_merge Command-line Options ............................................................. 111
5.6. cg_diff Command-line Options ................................................................111
5.7. Acting on Cachegrind’s Information ........................................................... 112
5.8. Simulation Details ...........................................................................113
5.8.1. Cache Simulation Specifics ................................................................. 113
5.8.2. Branch Simulation Specifics ................................................................ 114
5.8.3. Accuracy ..................................................................................114
5.9. Implementation Details .......................................................................115
5.9.1. How Cachegrind Works .................................................................... 115
5.9.2. Cachegrind Output File Format ..............................................................115
6. Callgrind: a call-graph generating cache and branch prediction profiler ..............................117
6.1. Overview ...................................................................................117
6.1.1. Functionality .............................................................................. 117
6.1.2. Basic Usage ...............................................................................118
6.2. Advanced Usage .............................................................................119
6.2.1. Multiple profiling dumps from one program run ...............................................119
6.2.2. Limiting the range of collected events ........................................................120
6.2.3. Counting global bus events ..................................................................121
6.2.4. Avoiding cycles ............................................................................121
6.2.5. Forking Programs ..........................................................................122
6.3. Callgrind Command-line Options ..............................................................122
6.3.1. Dump creation options ..................................................................... 123
6.3.2. Activity options ............................................................................123
6.3.3. Data collection options ..................................................................... 124
6.3.4. Cost entity separation options ............................................................... 125
6.3.5. Simulation options ......................................................................... 126
6.3.6. Cache simulation options ................................................................... 126
6.4. Callgrind Monitor Commands .................................................................127
6.5. Callgrind specific client requests .............................................................. 127
6.6. callgrind_annotate Command-line Options ..................................................... 128
6.7. callgrind_control Command-line Options .......................................................129
7. Helgrind: a thread error detector ................................................................ 131
7.1. Overview ...................................................................................131
7.2. Detected errors: Misuses of the POSIX pthreads API ............................................ 131
7.3. Detected errors: Inconsistent Lock Orderings ................................................... 132
7.4. Detected errors: Data Races .................................................................. 134
7.4.1. A Simple Data Race ........................................................................134
7.4.2. Helgrind’s Race Detection Algorithm ........................................................ 136
7.4.3. Interpreting Race Error Messages ............................................................139
7.5. Hints and Tips for Effective Use of Helgrind ....................................................140
7.6. Helgrind Command-line Options .............................................................. 144
7.7. Helgrind Monitor Commands ................................................................. 146
7.8. Helgrind Client Requests ..................................................................... 148
7.9. A To-Do List for Helgrind ....................................................................148
8. DRD: a thread error detector ....................................................................149
8.1. Overview ...................................................................................149
8.1.1. Multithreaded Programming Paradigms ...................................................... 149
8.1.2. POSIX Threads Programming Model ........................................................ 149
8.1.3. Multithreaded Programming Problems ....................................................... 150
8.1.4. Data Race Detection ....................................................................... 150
8.2. Using DRD ................................................................................. 151
8.2.1. DRD Command-line Options ................................................................151
8.2.2. Detected Errors: Data Races ................................................................ 154
8.2.3. Detected Errors: Lock Contention ........................................................... 155
8.2.4. Detected Errors: Misuse of the POSIX threads API ............................................ 156
8.2.5. Client Requests ............................................................................157
8.2.6. Debugging C++11 Programs ................................................................ 159
8.2.7. Debugging GNOME Programs .............................................................. 160
8.2.8. Debugging Boost.Thread Programs .......................................................... 160
8.2.9. Debugging OpenMP Programs .............................................................. 160
8.2.10. DRD and Custom Memory Allocators .......................................................161
8.2.11. DRD Versus Memcheck ................................................................... 162
8.2.12. Resource Requirements ....................................................................162
8.2.13. Hints and Tips for Effective Use of DRD .................................................... 162
8.3. Using the POSIX Threads API Effectively ......................................................163
8.3.1. Mutex types ...............................................................................163
8.3.2. Condition variables ........................................................................ 163
8.3.3. pthread_cond_timedwait and timeouts ........................................................163
8.4. Limitations ................................................................................. 164
8.5. Feedback ................................................................................... 164
9. Massif: a heap profiler ......................................................................... 165
9.1. Overview ...................................................................165
9.2. Using Massif and ms_print ................................................................... 165
9.2.1. An Example Program ...................................................................... 165
9.2.2. Running Massif ............................................................................166
9.2.3. Running ms_print ..........................................................................166
9.2.4. The Output Preamble .......................................................................167
9.2.5. The Output Graph ..........................................................................167
9.2.6. The Snapshot Details .......................................................................170
9.2.7. Forking Programs ..........................................................................174
9.2.8. Measuring All Memory in a Process ......................................................... 174
9.2.9. Acting on Massif’s Information ..............................................174
9.3. Massif Command-line Options ................................................................175
9.4. Massif Monitor Commands ...................................................................177
9.5. Massif Client Requests .......................................................................177
9.6. ms_print Command-line Options .............................................................. 177
9.7. Massif’s Output File Format .................................................. 178
10. DHAT: a dynamic heap analysis tool ........................................................... 179
10.1. Overview ..................................................................................179
10.2. Understanding DHAT’s output ...............................................................180
10.2.1. Interpreting the max-live, tot-alloc and deaths fields .......................................... 180
10.2.2. Interpreting the acc-ratios fields ............................................................ 181
10.2.3. Interpreting "Aggregated access counts by offset" data ........................................182
10.3. DHAT Command-line Options ...............................................................183
11. SGCheck: an experimental stack and global array overrun detector ................................ 185
11.1. Overview ..................................................................................185
11.2. SGCheck Command-line Options ............................................................ 185
11.3. How SGCheck Works .......................................................................185
11.4. Comparison with Memcheck .................................................................186
11.5. Limitations ................................................................................ 186
11.6. Still To Do: User-visible Functionality ........................................................187
11.7. Still To Do: Implementation Tidying ..........................................................187
12. BBV: an experimental basic block vector generation tool ......................................... 188
12.1. Overview ..................................................................................188
12.2. Using Basic Block Vectors to create SimPoints ................................................ 188
12.3. BBV Command-line Options ................................................................ 189
12.4. Basic Block Vector File Format .............................................................. 189
12.5. Implementation ............................................................................ 190
12.6. Threaded Executable Support ................................................................190
12.7. Validation ................................................................................. 190
12.8. Performance ............................................................................... 191
13. Lackey: an example tool ...................................................................... 192
13.1. Overview ..................................................................................192
13.2. Lackey Command-line Options .............................................................. 192
14. Nulgrind: the minimal Valgrind tool ............................................................193
14.1. Overview ..................................................................................193
1. Introduction
1.1. An Overview of Valgrind
Valgrind is an instrumentation framework for building dynamic analysis tools. It comes with a set of tools each of
which performs some kind of debugging, profiling, or similar task that helps you improve your programs. Valgrind’s
architecture is modular, so new tools can be created easily and without disturbing the existing structure.
A number of useful tools are supplied as standard.
1. Memcheck is a memory error detector. It helps you make your programs, particularly those written in C and C++,
more correct.
2. Cachegrind is a cache and branch-prediction profiler. It helps you make your programs run faster.
3. Callgrind is a call-graph generating cache profiler. It has some overlap with Cachegrind, but also gathers some
information that Cachegrind does not.
4. Helgrind is a thread error detector. It helps you make your multi-threaded programs more correct.
5. DRD is also a thread error detector. It is similar to Helgrind but uses different analysis techniques and so may find
different problems.
6. Massif is a heap profiler. It helps you make your programs use less memory.
7. DHAT is a different kind of heap profiler. It helps you understand issues of block lifetimes, block utilisation, and
layout inefficiencies.
8. SGcheck is an experimental tool that can detect overruns of stack and global arrays. Its functionality is
complementary to that of Memcheck: SGcheck finds problems that Memcheck can’t, and vice versa.
9. BBV is an experimental SimPoint basic block vector generator. It is useful to people doing computer architecture
research and development.
There are also a couple of minor tools that aren’t useful to most users: Lackey is an example tool that illustrates
some instrumentation basics; and Nulgrind is the minimal Valgrind tool that does no analysis or instrumentation, and
is only useful for testing purposes.
Valgrind is closely tied to details of the CPU and operating system, and to a lesser extent, the compiler and basic C
libraries. Nonetheless, it supports a number of widely-used platforms, listed in full at http://www.valgrind.org/.
Valgrind is built via the standard Unix ./configure, make, make install process; full details are given in
the README file in the distribution.
Valgrind is licensed under the GNU General Public License, version 2. The valgrind/*.h headers that
you may wish to include in your code (eg. valgrind.h, memcheck.h, helgrind.h, etc.) are distributed under
a BSD-style license, so you may include them in your code without worrying about license conflicts. Some of
the PThreads test cases, pth_*.c, are taken from "Pthreads Programming" by Bradford Nichols, Dick Buttlar &
Jacqueline Proulx Farrell, ISBN 1-56592-115-1, published by O’Reilly & Associates, Inc.
If you contribute code to Valgrind, please ensure your contributions are licensed as "GPLv2, or (at your option) any
later version." This is so as to allow the possibility of easily upgrading the license to GPLv3 in future. If you want to
modify code in the VEX subdirectory, please also see the file VEX/HACKING.README in the distribution.
1.2. How to navigate this manual
This manual’s structure reflects the structure of Valgrind itself. First, we describe the Valgrind core, how to use it, and
the options it supports. Then, each tool has its own chapter in this manual. You only need to read the documentation
for the core and for the tool(s) you actually use, although you may find it helpful to be at least a little bit familiar with
what all tools do. If you’re new to all this, you probably want to run the Memcheck tool, and you might find The
Valgrind Quick Start Guide useful.
Be aware that the core understands some command line options, and the tools have their own options which they know
about. This means there is no central place describing all the options that are accepted -- you have to read the options
documentation both for Valgrind’s core and for the tool you want to use.
2. Using and understanding the
Valgrind core
This chapter describes the Valgrind core services, command-line options and behaviours. That means it is relevant
regardless of what particular tool you are using. The information should be sufficient for you to make effective
day-to-day use of Valgrind. Advanced topics related to the Valgrind core are described in Valgrind’s core: advanced
topics.
A point of terminology: most references to "Valgrind" in this chapter refer to the Valgrind core services.
2.1. What Valgrind does with your program
Valgrind is designed to be as non-intrusive as possible. It works directly with existing executables. You don’t need to
recompile, relink, or otherwise modify the program to be checked.
You invoke Valgrind like this:
valgrind [valgrind-options] your-prog [your-prog-options]
The most important option is --tool, which dictates which Valgrind tool to run. For example, if you want to run
the command ls -l using the memory-checking tool Memcheck, issue this command:
valgrind --tool=memcheck ls -l
However, Memcheck is the default, so if you want to use it you can omit the --tool option.
Regardless of which tool is in use, Valgrind takes control of your program before it starts. Debugging information is
read from the executable and associated libraries, so that error messages and other outputs can be phrased in terms of
source code locations, when appropriate.
Your program is then run on a synthetic CPU provided by the Valgrind core. As new code is executed for the first
time, the core hands the code to the selected tool. The tool adds its own instrumentation code to this and hands the
result back to the core, which coordinates the continued execution of this instrumented code.
The amount of instrumentation code added varies widely between tools. At one end of the scale, Memcheck adds
code to check every memory access and every value computed, making it run 10-50 times slower than natively. At the
other end of the spectrum, the minimal tool, called Nulgrind, adds no instrumentation at all and causes in total "only"
about a 4 times slowdown.
Valgrind simulates every single instruction your program executes. Because of this, the active tool checks, or profiles,
not only the code in your application but also in all supporting dynamically-linked libraries, including the C library,
graphical libraries, and so on.
If you’re using an error-detection tool, Valgrind may detect errors in system libraries, for example the GNU C or X11
libraries, which you have to use. You might not be interested in these errors, since you probably have no control
over that code. Therefore, Valgrind allows you to selectively suppress errors, by recording them in a suppressions
file which is read when Valgrind starts up. The build mechanism selects default suppressions which give reasonable
behaviour for the OS and libraries detected on your machine. To make it easier to write suppressions, you can use the
--gen-suppressions=yes option. This tells Valgrind to print out a suppression for each reported error, which
you can then copy into a suppressions file.
Different error-checking tools report different kinds of errors. The suppression mechanism therefore allows you to say
which tool or tool(s) each suppression applies to.
2.2. Getting started
First off, consider whether it might be beneficial to recompile your application and supporting libraries with debugging
info enabled (the -g option). Without debugging info, the best Valgrind tools will be able to do is guess which function
a particular piece of code belongs to, which makes both error messages and profiling output nearly useless. With -g,
you’ll get messages which point directly to the relevant source code lines.
Another option you might like to consider, if you are working with C++, is -fno-inline. That makes it easier to
see the function-call chain, which can help reduce confusion when navigating around large C++ apps. For example,
debugging OpenOffice.org with Memcheck is a bit easier when using this option. You don’t have to do this, but doing
so helps Valgrind produce more accurate and less confusing error reports. Chances are you’re set up like this already,
if you intended to debug your program with GNU GDB, or some other debugger. Alternatively, the Valgrind option
--read-inline-info=yes instructs Valgrind to read the debug information that describes inlining. With
this, the function-call chain is shown properly, even when your application is compiled with inlining.
If you are planning to use Memcheck: On rare occasions, compiler optimisations (at -O2 and above, and sometimes
-O1) have been observed to generate code which fools Memcheck into wrongly reporting uninitialised value errors,
or missing uninitialised value errors. We have looked in detail into fixing this, and unfortunately the result is that
doing so would give a further significant slowdown in what is already a slow tool. So the best solution is to turn off
optimisation altogether. Since this often makes things unmanageably slow, a reasonable compromise is to use -O.
This gets you the majority of the benefits of higher optimisation levels whilst keeping relatively small the chances of
false positives or false negatives from Memcheck. Also, you should compile your code with -Wall because it can
identify some or all of the problems that Valgrind can miss at the higher optimisation levels. (Using -Wall is also a
good idea in general.) All other tools (as far as we know) are unaffected by optimisation level, and for profiling tools
like Cachegrind it is better to compile your program at its normal optimisation level.
Valgrind understands the DWARF2/3/4 formats used by GCC 3.1 and later. The reader for "stabs" debugging format
(used by GCC versions prior to 3.1) has been disabled in Valgrind 3.9.0.
When you’re ready to roll, run Valgrind as described above. Note that you should run the real (machine-code)
executable here. If your application is started by, for example, a shell or Perl script, you’ll need to modify it to
invoke Valgrind on the real executables. Running such scripts directly under Valgrind will result in you getting error
reports pertaining to /bin/sh,/usr/bin/perl, or whatever interpreter you’re using. This may not be what you
want and can be confusing. You can force the issue by giving the option --trace-children=yes, but confusion
is still likely.
2.3. The Commentary
Valgrind tools write a commentary, a stream of text, detailing error reports and other significant events. All lines in
the commentary have the following form:
==12345== some-message-from-Valgrind
The 12345 is the process ID. This scheme makes it easy to distinguish program output from Valgrind commentary,
and also easy to differentiate commentaries from different processes which have become merged together, for whatever
reason.
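Because of the fixed ==pid== prefix, program output and commentary are easy to separate after the fact. A quick sketch (the log contents here are made up):

```shell
# a captured run where program output and commentary are interleaved
cat > mixed.log <<'EOF'
==12345== Invalid read of size 4
hello from the program
==12345== ERROR SUMMARY: 1 errors from 1 contexts
EOF
# keep only the program's own output
grep -v '^==[0-9]*==' mixed.log
```

Dropping the -v inverts the match and keeps only the Valgrind commentary instead.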
By default, Valgrind tools write only essential messages to the commentary, so as to avoid flooding you with
information of secondary importance. If you want more information about what is happening, re-run, passing the -v
option to Valgrind. A second -v gives yet more detail.
You can direct the commentary to three different places:
1. The default: send it to a file descriptor, which is by default 2 (stderr). So, if you give the core no options, it will
write commentary to the standard error stream. If you want to send it to some other file descriptor, for example
number 9, you can specify --log-fd=9.
This is the simplest and most common arrangement, but can cause problems when Valgrinding entire trees of
processes which expect specific file descriptors, particularly stdin/stdout/stderr, to be available for their own use.
2. A less intrusive option is to write the commentary to a file, which you specify by --log-file=filename.
There are special format specifiers that can be used to embed the process ID or the value of an environment variable
in the log file name. These are useful/necessary if your program invokes multiple processes (especially for MPI
programs).
See the basic options section for more details.
3. The least intrusive option is to send the commentary to a network socket. The socket is specified as an IP address
and port number pair, like this: --log-socket=192.168.0.1:12345 if you want to send the output to host
IP 192.168.0.1 port 12345 (note: we have no idea if 12345 is a port of pre-existing significance). You can also omit
the port number: --log-socket=192.168.0.1, in which case a default port of 1500 is used. This default is
defined by the constant VG_CLO_DEFAULT_LOGPORT in the sources.
Note, unfortunately, that you have to use an IP address here, rather than a hostname.
Writing to a network socket is pointless if you don’t have something listening at the other end. We provide a simple
listener program, valgrind-listener, which accepts connections on the specified port and copies whatever
it is sent to stdout. Probably someone will tell us this is a horrible security risk. It seems likely that people will
write more sophisticated listeners in the fullness of time.
valgrind-listener can accept simultaneous connections from up to 50 Valgrinded processes. In front of
each line of output it prints the current number of active connections in round brackets.
valgrind-listener accepts three command-line options:
-e, --exit-at-zero
When the number of connected processes falls back to zero, exit. Without this, it will run forever, that is, until you
send it Control-C.
--max-connect=INTEGER
By default, the listener can accept connections from up to 50 processes. Occasionally, that number is too small. Use
this option to provide a different limit. E.g. --max-connect=100.
portnumber
Changes the port it listens on from the default (1500). The specified port must be in the range 1024 to 65535. The
same restriction applies to port numbers specified by a --log-socket to Valgrind itself.
If a Valgrinded process fails to connect to a listener, for whatever reason (the listener isn’t running, invalid or
unreachable host or port, etc), Valgrind switches back to writing the commentary to stderr. The same goes for
any process which loses an established connection to a listener. In other words, killing the listener doesn’t kill the
processes sending data to it.
Here is an important point about the relationship between the commentary and profiling output from tools. The
commentary contains a mix of messages from the Valgrind core and the selected tool. If the tool reports errors, it will
report them to the commentary. However, if the tool does profiling, the profile data will be written to a file of some
kind, depending on the tool, and independent of what --log-* options are in force. The commentary is intended
to be a low-bandwidth, human-readable channel. Profiling data, on the other hand, is usually voluminous and not
meaningful without further processing, which is why we have chosen this arrangement.
2.4. Reporting of errors
When an error-checking tool detects something bad happening in the program, an error message is written to the
commentary. Here’s an example from Memcheck:
==25832== Invalid read of size 4
==25832== at 0x8048724: BandMatrix::ReSize(int, int, int) (bogon.cpp:45)
==25832== by 0x80487AF: main (bogon.cpp:66)
==25832== Address 0xBFFFF74C is not stack’d, malloc’d or free’d
This message says that the program did an illegal 4-byte read of address 0xBFFFF74C, which, as far as Memcheck
can tell, is not a valid stack address, nor corresponds to any current heap blocks or recently freed heap blocks. The
read is happening at line 45 of bogon.cpp, called from line 66 of the same file, etc. For errors associated with
an identified (current or freed) heap block, for example reading freed memory, Valgrind reports not only the location
where the error happened, but also where the associated heap block was allocated/freed.
Valgrind remembers all error reports. When an error is detected, it is compared against old reports, to see if it is a
duplicate. If so, the error is noted, but no further commentary is emitted. This avoids you being swamped with
bazillions of duplicate error reports.
If you want to know how many times each error occurred, run with the -v option. When execution finishes, all the
reports are printed out, along with, and sorted by, their occurrence counts. This makes it easy to see which errors have
occurred most frequently.
Errors are reported before the associated operation actually happens. For example, if you’re using Memcheck and
your program attempts to read from address zero, Memcheck will emit a message to this effect, and your program will
then likely die with a segmentation fault.
In general, you should try and fix errors in the order that they are reported. Not doing so can be confusing. For
example, a program which copies uninitialised values to several memory locations, and later uses them, will generate
several error messages, when run on Memcheck. The first such error message may well give the most direct clue to
the root cause of the problem.
The process of detecting duplicate errors is quite an expensive one and can become a significant performance overhead
if your program generates huge quantities of errors. To avoid serious problems, Valgrind will simply stop collecting
errors after 1,000 different errors have been seen, or 10,000,000 errors in total have been seen. In this situation you
might as well stop your program and fix it, because Valgrind won’t tell you anything else useful after this. Note that
the 1,000/10,000,000 limits apply after suppressed errors are removed. These limits are defined in m_errormgr.c
and can be increased if necessary.
To avoid this cutoff you can use the --error-limit=no option. Then Valgrind will always show errors, regardless
of how many there are. Use this option carefully, since it may have a bad effect on performance.
2.5. Suppressing errors
The error-checking tools detect numerous problems in the system libraries, such as the C library, which come pre-
installed with your OS. You can’t easily fix these, but you don’t want to see these errors (and yes, there are many!)
So Valgrind reads a list of errors to suppress at startup. A default suppression file is created by the ./configure
script when the system is built.
You can modify and add to the suppressions file at your leisure, or, better, write your own. Multiple suppression files
are allowed. This is useful if part of your project contains errors you can’t or don’t want to fix, yet you don’t want to
continuously be reminded of them.
Note: By far the easiest way to add suppressions is to use the --gen-suppressions=yes option described in
Core Command-line Options. This generates suppressions automatically. For best results, though, you may want to
edit the output of --gen-suppressions=yes by hand, in which case it would be advisable to read through this
section.
Each error to be suppressed is described very specifically, to minimise the possibility that a suppression-directive
inadvertently suppresses a bunch of similar errors which you did want to see. The suppression mechanism is designed
to allow precise yet flexible specification of errors to suppress.
If you use the -v option, at the end of execution, Valgrind prints out one line for each used suppression, giving the
number of times it got used, its name and the filename and line number where the suppression is defined. Depending
on the suppression kind, the filename and line number are optionally followed by additional information (such as the
number of blocks and bytes suppressed by a Memcheck leak suppression). Here’s the suppressions used by a run of
valgrind -v --tool=memcheck ls -l:
--1610-- used_suppression: 2 dl-hack3-cond-1 /usr/lib/valgrind/default.supp:1234
--1610-- used_suppression: 2 glibc-2.5.x-on-SUSE-10.2-(PPC)-2a /usr/lib/valgrind/default.supp:1234
Multiple suppressions files are allowed. Valgrind loads suppression patterns from $PREFIX/lib/valgrind/default.supp
unless --default-suppressions=no has been specified. You can ask to add suppressions from additional
files by specifying --suppressions=/path/to/file.supp one or more times.
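As a sketch, writing a small suppressions file by hand and passing it to Valgrind might look like this. The suppression entry itself is hypothetical (made-up function and library names); in practice you would start from --gen-suppressions=yes output:

```shell
# a hand-written suppressions file; names are illustrative only
cat > my.supp <<'EOF'
{
   ignore-libfoo-cond-jump
   Memcheck:Cond
   fun:foo_internal_*
   obj:*/libfoo.so*
}
EOF
# add it on top of the default suppressions; the invocation is
# guarded because valgrind may not be installed on this machine
command -v valgrind >/dev/null 2>&1 && \
    valgrind --suppressions=./my.supp ls >/dev/null 2>&1 || true
```

Repeating --suppressions=... loads further files, as described above.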
If you want to understand more about suppressions, look at an existing suppressions file whilst reading the following
documentation. The file glibc-2.3.supp, in the source distribution, provides some good examples.
Each suppression has the following components:
First line: its name. This merely gives a handy name to the suppression, by which it is referred to in the summary
of used suppressions printed out when a program finishes. It’s not important what the name is; any identifying
string will do.
Second line: name of the tool(s) that the suppression is for (if more than one, comma-separated), and the name of
the suppression itself, separated by a colon (n.b.: no spaces are allowed), eg:
tool_name1,tool_name2:suppression_name
Recall that Valgrind is a modular system, in which different instrumentation tools can observe your program whilst it
is running. Since different tools detect different kinds of errors, it is necessary to say which tool(s) the suppression
is meaningful to.
A tool will complain, at startup, if it does not understand a suppression directed to it. Tools ignore
suppressions which are not directed to them. As a result, it is quite practical to put suppressions for all tools
into the same suppression file.
Next line: a small number of suppression types have extra information after the second line (eg. the Param
suppression for Memcheck).
Remaining lines: This is the calling context for the error -- the chain of function calls that led to it. There can be
up to 24 of these lines.
Locations may be names of either shared objects, functions, or source lines. They begin with obj:, fun:, or
src: respectively. Function, object, and file names to match against may use the wildcard characters * and ?.
Source lines are specified using the form filename[:lineNumber].
Important note: C++ function names must be mangled. If you are writing suppressions by hand, use the
--demangle=no option to get the mangled names in your error messages. An example of a mangled C++ name
is _ZN9QListView4showEv. This is the form that the GNU C++ compiler uses internally, and the form that
must be used in suppression files. The equivalent demangled name, QListView::show(), is what you see at
the C++ source code level.
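Going the other way, the c++filt utility (part of GNU binutils, assumed to be installed here) demangles names, which is handy for checking what a mangled name in an error message refers to:

```shell
# demangle the example name from the text above
echo _ZN9QListView4showEv | c++filt
```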
A location line may also be simply "..." (three dots). This is a frame-level wildcard, which matches zero or more
frames. Frame level wildcards are useful because they make it easy to ignore varying numbers of uninteresting
frames in between frames of interest. That is often important when writing suppressions which are intended to be
robust against variations in the amount of function inlining done by compilers.
Finally, the entire suppression must be between curly braces. Each brace must be the first character on its own line.
A suppression only suppresses an error when the error matches all the details in the suppression. Here’s an example:
{
__gconv_transform_ascii_internal/__mbrtowc/mbtowc
Memcheck:Value4
fun:__gconv_transform_ascii_internal
fun:__mbr*toc
fun:mbtowc
}
What it means is: for Memcheck only, suppress a use-of-uninitialised-value error, when the data size
is 4, when it occurs in the function __gconv_transform_ascii_internal, when that is called
from any function of name matching __mbr*toc, when that is called from mbtowc. It doesn’t apply
under any other circumstances. The string by which this suppression is identified to the user is
__gconv_transform_ascii_internal/__mbrtowc/mbtowc.
(See Writing suppression files for more details on the specifics of Memcheck’s suppression kinds.)
Another example, again for the Memcheck tool:
{
libX11.so.6.2/libX11.so.6.2/libXaw.so.7.0
Memcheck:Value4
obj:/usr/X11R6/lib/libX11.so.6.2
obj:/usr/X11R6/lib/libX11.so.6.2
obj:/usr/X11R6/lib/libXaw.so.7.0
}
This suppresses any size 4 uninitialised-value error which occurs anywhere in libX11.so.6.2, when called from
anywhere in the same library, when called from anywhere in libXaw.so.7.0. The inexact specification of
locations is regrettable, but is about all you can hope for, given that the X11 libraries shipped on the Linux distro on
which this example was made have had their symbol tables removed.
An example of the src: specification, again for the Memcheck tool:
{
libX11.so.6.2/libX11.so.6.2/libXaw.so.7.0
Memcheck:Value4
src:valid.c:321
}
This suppresses any size-4 uninitialised-value error which occurs at line 321 in valid.c.
Although the above two examples do not make this clear, you can freely mix obj:, fun:, and src: lines in a
suppression.
Finally, here’s an example using three frame-level wildcards:
{
a-contrived-example
Memcheck:Leak
fun:malloc
...
fun:ddd
...
fun:ccc
...
fun:main
}
This suppresses Memcheck memory-leak errors, in the case where the allocation was done by main calling (through
any number of intermediaries, including zero) ccc, calling onwards via ddd and eventually reaching malloc.
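Since each brace must sit alone on its own line, a rough structural check of a suppressions file can be scripted. This is only a sanity-check sketch, not a full parser of the format:

```shell
# a sample file reusing the contrived example above
cat > check.supp <<'EOF'
{
   a-contrived-example
   Memcheck:Leak
   fun:malloc
   ...
   fun:main
}
EOF
# count lines that are exactly "{" or "}" and require them to balance
awk '$0 == "{" {o++} $0 == "}" {c++} END {exit !(o == c && o > 0)}' check.supp \
    && echo "braces balanced"
```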
2.6. Core Command-line Options
As mentioned above, Valgrind’s core accepts a common set of options. The tools also accept tool-specific options,
which are documented separately for each tool.
Valgrind’s default settings succeed in giving reasonable behaviour in most cases. We group the available options by
rough categories.
2.6.1. Tool-selection Option
The single most important option.
--tool=<toolname> [default: memcheck]
Run the Valgrind tool called toolname, e.g. memcheck, cachegrind, callgrind, helgrind, drd, massif, lackey, none,
exp-sgcheck, exp-bbv, exp-dhat, etc.
2.6.2. Basic Options
These options work with all tools.
-h, --help
Show help for all options, both for the core and for the selected tool. If the option is repeated it is equivalent to giving
--help-debug.
--help-debug
Same as --help, but also lists debugging options which usually are only of use to Valgrind’s developers.
--version
Show the version number of the Valgrind core. Tools can have their own version numbers. There is a scheme in place
to ensure that tools only execute when the core version is one they are known to work with. This was done to minimise
the chances of strange problems arising from tool-vs-core version incompatibilities.
-q, --quiet
Run silently, and only print error messages. Useful if you are running regression tests or have some other automated
test machinery.
-v, --verbose
Be more verbose. Gives extra information on various aspects of your program, such as: the shared objects loaded, the
suppressions used, the progress of the instrumentation and execution engines, and warnings about unusual behaviour.
Repeating the option increases the verbosity level.
--trace-children=<yes|no> [default: no]
When enabled, Valgrind will trace into sub-processes initiated via the exec system call. This is necessary for
multi-process programs.
Note that Valgrind does trace into the child of a fork (it would be difficult not to, since fork makes an identical
copy of a process), so this option is arguably badly named. However, most children of fork calls immediately call
exec anyway.
--trace-children-skip=patt1,patt2,...
This option only has an effect when --trace-children=yes is specified. It allows some children to be
skipped. The option takes a comma-separated list of patterns for the names of child executables that Valgrind should
not trace into. Patterns may include the metacharacters ? and *, which have the usual meaning.
This can be useful for pruning uninteresting branches from a tree of processes being run on Valgrind. But you should
be careful when using it. When Valgrind skips tracing into an executable, it doesn’t just skip tracing that executable,
it also skips tracing any of that executable’s child processes. In other words, the flag doesn’t merely cause tracing to
stop at the specified executables -- it skips tracing of entire process subtrees rooted at any of the specified executables.
--trace-children-skip-by-arg=patt1,patt2,...
This is the same as --trace-children-skip, with one difference: the decision as to whether to trace into a
child process is made by examining the arguments to the child process, rather than the name of its executable.
--child-silent-after-fork=<yes|no> [default: no]
When enabled, Valgrind will not show any debugging or logging output for the child process resulting from a fork
call. This can make the output less confusing (although more misleading) when dealing with processes that create
children. It is particularly useful in conjunction with --trace-children=yes. Use of this option is also strongly
recommended if you are requesting XML output (--xml=yes), since otherwise the XML from child and parent may
become mixed up, which usually makes it useless.
--vgdb=<no|yes|full> [default: yes]
Valgrind will provide "gdbserver" functionality when --vgdb=yes or --vgdb=full is specified. This allows
an external GNU GDB debugger to control and debug your program when it runs on Valgrind. --vgdb=full
incurs significant performance overheads, but provides more precise breakpoints and watchpoints. See Debugging
your program using Valgrind’s gdbserver and GDB for a detailed description.
If the embedded gdbserver is enabled but no gdb is currently being used, the vgdb command line utility can send
"monitor commands" to Valgrind from a shell. The Valgrind core provides a set of Valgrind monitor commands. A
tool can optionally provide tool specific monitor commands, which are documented in the tool specific chapter.
--vgdb-error=<number> [default: 999999999]
Use this option when the Valgrind gdbserver is enabled with --vgdb=yes or --vgdb=full. Tools that report
errors will wait for "number" errors to be reported before freezing the program and waiting for you to connect with
GDB. It follows that a value of zero will cause the gdbserver to be started before your program is executed. This is
typically used to insert GDB breakpoints before execution, and also works with tools that do not report errors, such as
Massif.
--vgdb-stop-at=<set> [default: none]
Use this option when the Valgrind gdbserver is enabled with --vgdb=yes or --vgdb=full. The Valgrind
gdbserver will be invoked for each error once the number of errors given by --vgdb-error has been reported. You
can additionally ask the Valgrind gdbserver to be invoked for other events, specified in one of the following ways:
a comma-separated list of one or more of startup, exit, valgrindabexit.
The values startup, exit and valgrindabexit respectively indicate to invoke gdbserver before your program
is executed, after the last instruction of your program, and on Valgrind abnormal exit (e.g. internal error, out of
memory, ...).
Note: startup and --vgdb-error=0 will both cause Valgrind gdbserver to be invoked before your program
is executed. The --vgdb-error=0 will in addition cause your program to stop on all subsequent errors.
all to specify the complete set. It is equivalent to --vgdb-stop-at=startup,exit,valgrindabexit.
none for the empty set.
--track-fds=<yes|no> [default: no]
When enabled, Valgrind will print out a list of open file descriptors on exit or on request, via the gdbserver monitor
command v.info open_fds. Along with each file descriptor is printed a stack backtrace of where the file was
opened and any details relating to the file descriptor such as the file name or socket details.
--time-stamp=<yes|no> [default: no]
When enabled, each message is preceded with an indication of the elapsed wallclock time since startup, expressed as
days, hours, minutes, seconds and milliseconds.
--log-fd=<number> [default: 2, stderr]
Specifies that Valgrind should send all of its messages to the specified file descriptor. The default, 2, is the standard
error channel (stderr). Note that this may interfere with the client’s own use of stderr, as Valgrind’s output will be
interleaved with any output that the client sends to stderr.
--log-file=<filename>
Specifies that Valgrind should send all of its messages to the specified file. If the file name is empty, it causes an
abort. There are three special format specifiers that can be used in the file name.
%p is replaced with the current process ID. This is very useful for programs that invoke multiple processes. WARNING:
If you use --trace-children=yes and your program invokes multiple processes OR your program forks without
calling exec afterwards, and you don’t use this specifier (or the %q specifier below), the Valgrind output from all those
processes will go into one file, possibly jumbled up, and possibly incomplete. Note: If the program forks and calls
exec afterwards, Valgrind output of the child from the period between fork and exec will be lost. Fortunately this gap
is really tiny for most programs; and modern programs use posix_spawn anyway.
%n is replaced with a file sequence number unique for this process. This is useful for processes that produce several
files from the same filename template.
%q{FOO} is replaced with the contents of the environment variable FOO. If the {FOO} part is malformed, it causes an
abort. This specifier is rarely needed, but very useful in certain circumstances (eg. when running MPI programs). The
idea is that you specify a variable which will be set differently for each process in the job, for example BPROC_RANK
or whatever is applicable in your MPI setup. If the named environment variable is not set, it causes an abort. Note
that in some shells, the { and } characters may need to be escaped with a backslash.
%% is replaced with %.
If a % is followed by any other character, it causes an abort.
If the file name specifies a relative file name, it is put in the program’s initial working directory: this is the current
directory when the program started its execution after the fork or after the exec. If it specifies an absolute file name
(ie. starts with ’/’) then it is put there.
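The expansion rules above can be sketched in Python. This is a hypothetical helper for reasoning about the documented behaviour, not Valgrind's actual implementation, and the name expand_log_filename is invented:

```python
import os
import re

def expand_log_filename(template, seq=1, env=None):
    """Sketch of the documented --log-file specifiers:
    %p -> process ID, %n -> per-process sequence number,
    %q{VAR} -> environment variable VAR, %% -> literal '%'.
    Valgrind aborts on any other use of '%'; this raises instead."""
    env = os.environ if env is None else env
    out, i = [], 0
    while i < len(template):
        c = template[i]
        if c != '%':
            out.append(c)
            i += 1
            continue
        nxt = template[i + 1] if i + 1 < len(template) else ''
        if nxt == 'p':
            out.append(str(os.getpid()))
            i += 2
        elif nxt == 'n':
            out.append(str(seq))
            i += 2
        elif nxt == '%':
            out.append('%')
            i += 2
        elif nxt == 'q':
            # The {VAR} part must be well-formed and VAR must be set.
            m = re.match(r'\{([^}]*)\}', template[i + 2:])
            if m is None or m.group(1) not in env:
                raise ValueError("malformed or unset %q{...} specifier")
            out.append(env[m.group(1)])
            i += 2 + m.end()
        else:
            raise ValueError(f"'%' followed by unsupported character: {nxt!r}")
    return ''.join(out)
```

For example, in an MPI job, --log-file=out-%q{BPROC_RANK}.log would give each rank its own log file under these rules.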
--log-socket=<ip-address:port-number>
Specifies that Valgrind should send all of its messages to the specified port at the specified IP address. The port
may be omitted, in which case port 1500 is used. If a connection cannot be made to the specified socket, Valgrind
falls back to writing output to the standard error (stderr). This option is intended to be used in conjunction with the
valgrind-listener program. For further details, see the commentary in the manual.
2.6.3. Error-related Options
These options are used by all tools that can report errors, e.g. Memcheck, but not Cachegrind.
--xml=<yes|no> [default: no]
When enabled, the important parts of the output (e.g. tool error messages) will be in XML format rather than plain
text. Furthermore, the XML output will be sent to a different output channel than the plain text output. Therefore,
you must also use one of --xml-fd, --xml-file or --xml-socket to specify where the XML is to be sent.
Less important messages will still be printed in plain text, but because the XML output and plain text output are sent
to different output channels (the destination of the plain text output is still controlled by --log-fd, --log-file
and --log-socket) this should not cause problems.
This option is aimed at making life easier for tools that consume Valgrind’s output as input, such as GUI front ends.
Currently this option works with Memcheck, Helgrind, DRD and SGcheck. The output format is specified in the file
docs/internals/xml-output-protocol4.txt in the source tree for Valgrind 3.5.0 or later.
The recommended options for a GUI to pass, when requesting XML output, are: --xml=yes to enable XML output,
--xml-file to send the XML output to a (presumably GUI-selected) file, --log-file to send the plain text
output to a second GUI-selected file, --child-silent-after-fork=yes, and -q to restrict the plain text
output to critical error messages created by Valgrind itself. For example, failure to read a specified suppressions file
counts as a critical error message. In this way, for a successful run the text output file will be empty. But if it isn’t
empty, then it will contain important information which the GUI user should be made aware of.
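Assembled into a single invocation, the recommendation above might look like this sketch, where the program name and output file names are placeholders chosen by the GUI:

```python
def gui_valgrind_argv(program, program_args, xml_out, text_out):
    """Sketch: build the manual's recommended command line for a GUI
    front end requesting XML output. All file names are hypothetical."""
    return [
        "valgrind",
        "--xml=yes",                      # machine-readable error reports
        "--xml-file=" + xml_out,          # GUI-selected XML destination
        "--log-file=" + text_out,         # plain text kept on its own channel
        "--child-silent-after-fork=yes",
        "-q",                             # only critical plain-text messages
        program,
        *program_args,
    ]
```

With these options, a successful run leaves the plain-text file empty; a non-empty file signals something the GUI should surface to the user.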
--xml-fd=<number> [default: -1, disabled]
Specifies that Valgrind should send its XML output to the specified file descriptor. It must be used in conjunction
with --xml=yes.
--xml-file=<filename>
Specifies that Valgrind should send its XML output to the specified file. It must be used in conjunction with
--xml=yes. Any %p or %q sequences appearing in the filename are expanded in exactly the same way as they
are for --log-file. See the description of --log-file for details.
--xml-socket=<ip-address:port-number>
Specifies that Valgrind should send its XML output to the specified port at the specified IP address. It must be used in
conjunction with --xml=yes. The form of the argument is the same as that used by --log-socket. See the
description of --log-socket for further details.
--xml-user-comment=<string>
Embeds an extra user comment string at the start of the XML output. Only works when --xml=yes is specified;
ignored otherwise.
--demangle=<yes|no> [default: yes]
Enable/disable automatic demangling (decoding) of C++ names. Enabled by default. When enabled, Valgrind will
attempt to translate encoded C++ names back to something approaching the original. The demangler handles symbols
mangled by g++ versions 2.X, 3.X and 4.X.
An important fact about demangling is that function names mentioned in suppressions files should be in their mangled
form. Valgrind does not demangle function names when searching for applicable suppressions, because to do otherwise
would make suppression file contents dependent on the state of Valgrind’s demangling machinery, and also slow down
suppression matching.
--num-callers=<number> [default: 12]
Specifies the maximum number of entries shown in stack traces that identify program locations. Note that errors
are commoned up using only the top four function locations (the place in the current function, and that of its three
immediate callers). So this doesn’t affect the total number of errors reported.
The maximum value for this is 500. Note that higher settings will make Valgrind run a bit more slowly and take a bit
more memory, but can be useful when working with programs with deeply-nested call chains.
--unw-stack-scan-thresh=<number> [default: 0], --unw-stack-scan-frames=<number>
[default: 5]
Stack-scanning support is available only on ARM targets.
These flags enable and control stack unwinding by stack scanning. When the normal stack unwinding mechanisms --
usage of Dwarf CFI records, and frame-pointer following -- fail, stack scanning may be able to recover a stack trace.
Note that stack scanning is an imprecise, heuristic mechanism that may give very misleading results, or none at all.
It should be used only in emergencies, when normal unwinding fails, and it is important to nevertheless have stack
traces.
Stack scanning is a simple technique: the unwinder reads words from the stack, and tries to guess which of them might
be return addresses, by checking to see if they point just after ARM or Thumb call instructions. If so, the word is
added to the backtrace.
The main danger occurs when a function call returns, leaving its return address exposed, and a new function is called,
but the new function does not overwrite the old address. The result of this is that the backtrace may contain entries for
functions which have already returned, and so be very confusing.
A second limitation of this implementation is that it will scan only the page (4KB, normally) containing the starting
stack pointer. If the stack frames are large, this may result in only a few (or not even any) being present in the trace.
Also, if you are unlucky and have an initial stack pointer near the end of its containing page, the scan may miss all
interesting frames.
By default stack scanning is disabled. The normal use case is to ask for it when a stack trace would otherwise be very
short. So, to enable it, use --unw-stack-scan-thresh=number. This requests Valgrind to try using stack
scanning to "extend" stack traces which contain fewer than number frames.
If stack scanning does take place, it will only generate at most the number of frames specified by
--unw-stack-scan-frames. Typically, stack scanning generates so many garbage entries that this value
is set to a low value (5) by default. In no case will a stack trace larger than the value specified by --num-callers
be created.
--error-limit=<yes|no> [default: yes]
When enabled, Valgrind stops reporting errors after 10,000,000 in total, or 1,000 different ones, have been seen. This
is to stop the error tracking machinery from becoming a huge performance overhead in programs with many errors.
--error-exitcode=<number> [default: 0]
Specifies an alternative exit code to return if Valgrind reported any errors in the run. When set to the default value
(zero), the return value from Valgrind will always be the return value of the process being simulated. When set to a
nonzero value, that value is returned instead, if Valgrind detects any errors. This is useful for using Valgrind as part
of an automated test suite, since it makes it easy to detect test cases for which Valgrind has reported errors, just by
inspecting return codes.
--exit-on-first-error=<yes|no> [default: no]
If this option is enabled, Valgrind exits on the first error. A nonzero exit value must be defined using the
--error-exitcode option. This is useful if you are running regression tests or have some other automated test
machinery.
--error-markers=<begin>,<end> [default: none]
When errors are output as plain text (i.e. XML not used), --error-markers instructs Valgrind to output a line
containing the begin (end) string before (after) each error.
Such marker lines facilitate searching for and/or extracting errors in an output file that contains Valgrind errors
mixed with the program output.
Note that empty markers are accepted. So, only using a begin (or an end) marker is possible.
--sigill-diagnostics=<yes|no> [default: yes]
Enable/disable printing of illegal instruction diagnostics. Enabled by default, but defaults to disabled when --quiet
is given. The default can always be explicitly overridden by giving this option.
When enabled, a warning message will be printed, along with some diagnostics, whenever an instruction is encoun-
tered that Valgrind cannot decode or translate, before the program is given a SIGILL signal. Often an illegal instruction
indicates a bug in the program or missing support for the particular instruction in Valgrind. But some programs do
deliberately try to execute an instruction that might be missing and trap the SIGILL signal to detect processor features.
Using this flag makes it possible to avoid the diagnostic output that you would otherwise get in such cases.
--keep-debuginfo=<yes|no> [default: no]
When enabled, keep ("archive") symbols and all other debuginfo for unloaded code. This allows saved stack traces to
include file/line info for code that has been dlclose’d (or similar). Be careful with this, since it can lead to unbounded
memory use for programs which repeatedly load and unload shared objects.
Some tools and some functionalities have only limited support for archived debug info. Memcheck fully supports
it. Generally, tools that report errors can use archived debug info to show the error stack traces. The known
limitations are: Helgrind’s past access stack trace of a race condition does not use archived debug info. Massif (and
more generally the xtree Massif output format) does not make use of archived debug info. Only Memcheck has been
(somewhat) tested with --keep-debuginfo=yes, so other tools may have unknown limitations.
--show-below-main=<yes|no> [default: no]
By default, stack traces for errors do not show any functions that appear beneath main because most of the time it’s
uninteresting C library stuff and/or gobbledygook. Alternatively, if main is not present in the stack trace, stack traces
will not show any functions below main-like functions such as glibc’s __libc_start_main. Furthermore, if
main-like functions are present in the trace, they are normalised as (below main), in order to make the output
more deterministic.
If this option is enabled, all stack trace entries will be shown and main-like functions will not be normalised.
--fullpath-after=<string> [default: don’t show source paths]
By default Valgrind only shows the filenames in stack traces, but not full paths to source files. When using
Valgrind in large projects where the sources reside in multiple different directories, this can be inconvenient.
--fullpath-after provides a flexible solution to this problem. When this option is present, the path to each
source file is shown, with the following all-important caveat: if string is found in the path, then the path up to and
including string is omitted, else the path is shown unmodified. Note that string is not required to be a prefix of
the path.
For example, consider a file named /home/janedoe/blah/src/foo/bar/xyzzy.c. Specify-
ing --fullpath-after=/home/janedoe/blah/src/ will cause Valgrind to show the name as
foo/bar/xyzzy.c.
Because the string is not required to be a prefix, --fullpath-after=src/ will produce the same out-
put. This is useful when the path contains arbitrary machine-generated characters. For example,
the path /my/build/dir/C32A1B47/blah/src/foo/xyzzy can be pruned to foo/xyzzy using
--fullpath-after=/blah/src/.
If you simply want to see the full path, just specify an empty string: --fullpath-after=. This isn’t a special
case, merely a logical consequence of the above rules.
Finally, you can use --fullpath-after multiple times. Any appearance of it causes Valgrind to switch
to producing full paths and applying the above filtering rule. Each produced path is compared against all the
--fullpath-after-specified strings, in the order specified. The first string to match causes the path to be
truncated as described above. If none match, the full path is shown. This facilitates chopping off prefixes when the
sources are drawn from a number of unrelated directories.
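The truncation rule can be sketched as follows; prune_source_path is a hypothetical name used for illustration, not a Valgrind API. Note that the empty-string case needs no special handling, exactly as the manual describes:

```python
def prune_source_path(path, patterns):
    """Sketch of the --fullpath-after rule: patterns are tried in the
    order specified; the first one found in the path wins, and the path
    up to and including that occurrence is omitted. If none match, the
    full path is shown. An empty pattern matches at position 0, so it
    yields the full path as a logical consequence of the rule."""
    for pat in patterns:
        idx = path.find(pat)
        if idx != -1:
            return path[idx + len(pat):]
    return path
```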
--extra-debuginfo-path=<path> [default: undefined and unused]
By default Valgrind searches in several well-known paths for debug objects, such as /usr/lib/debug/.
However, there may be scenarios where you may wish to put debug objects at an arbitrary location, such as external
storage when running Valgrind on a mobile device with limited local storage. Another example might be a situation
where you do not have permission to install debug object packages on the system where you are running Valgrind.
In these scenarios, you may provide an absolute path as an extra, final place for Valgrind to search for debug objects
by specifying --extra-debuginfo-path=/path/to/debug/objects. The given path will be prepended
to the absolute path name of the searched-for object. For example, if Valgrind is looking for the debuginfo
for /w/x/y/zz.so and --extra-debuginfo-path=/a/b/c is specified, it will look for a debug object at
/a/b/c/w/x/y/zz.so.
This flag should only be specified once. If it is specified multiple times, only the last instance is honoured.
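The prepending rule can be sketched as simple string concatenation; debug_object_candidate is an invented helper name, not part of Valgrind:

```python
def debug_object_candidate(extra_path, object_path):
    """Sketch: --extra-debuginfo-path prepends the given directory to
    the absolute path name of the searched-for object."""
    # Strip a trailing slash so the two absolute paths join cleanly.
    return extra_path.rstrip("/") + object_path
```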
--debuginfo-server=ipaddr:port [default: undefined and unused]
This is a new, experimental, feature introduced in version 3.9.0.
In some scenarios it may be convenient to read debuginfo from objects stored on a different machine. With this flag,
Valgrind will query a debuginfo server running on ipaddr and listening on port port, if it cannot find the debuginfo
object in the local filesystem.
The debuginfo server must accept TCP connections on port port. The debuginfo server is contained in the source
file auxprogs/valgrind-di-server.c. It will only serve from the directory it is started in. port defaults to
1500 in both client and server if not specified.
If Valgrind looks for the debuginfo for /w/x/y/zz.so by using the debuginfo server, it will strip the pathname
components and merely request zz.so on the server. That in turn will look only in its current working directory for
a matching debuginfo object.
The debuginfo data is transmitted in small fragments (8 KB) as requested by Valgrind. Each block is compressed
using LZO to reduce transmission time. The implementation has been tuned for best performance over a single-stage
802.11g (WiFi) network link.
Note that checks for matching primary vs debug objects, using GNU debuglink CRC scheme, are per-
formed even when using the debuginfo server. To disable such checking, you need to also specify
--allow-mismatched-debuginfo=yes.
By default the Valgrind build system will build valgrind-di-server for the target platform, which is almost
certainly not what you want. So far we have been unable to find out how to get automake/autoconf to build it for the
build platform. If you want to use it, you will have to recompile it by hand using the command shown at the top of
auxprogs/valgrind-di-server.c.
--allow-mismatched-debuginfo=no|yes [no]
When reading debuginfo from separate debuginfo objects, Valgrind will by default check that the main and debuginfo
objects match, using the GNU debuglink mechanism. This guarantees that it does not read debuginfo from out of date
debuginfo objects, and also ensures that Valgrind can’t crash as a result of mismatches.
This check can be overridden using --allow-mismatched-debuginfo=yes. This may be useful when the
debuginfo and main objects have not been split in the proper way. Be careful when using this, though: it disables all
consistency checking, and Valgrind has been observed to crash when the main and debuginfo objects don’t match.
--suppressions=<filename> [default: $PREFIX/lib/valgrind/default.supp]
Specifies an extra file from which to read descriptions of errors to suppress. You may use up to 100 extra suppression
files.
--gen-suppressions=<yes|no|all> [default: no]
When set to yes, Valgrind will pause after every error shown and print the line:
---- Print suppression ? --- [Return/N/n/Y/y/C/c] ----
Pressing Ret, or N Ret or n Ret, causes Valgrind to continue execution without printing a suppression for this error.
Pressing Y Ret or y Ret causes Valgrind to write a suppression for this error. You can then cut and paste it into a
suppression file if you don’t want to hear about the error in the future.
When set to all, Valgrind will print a suppression for every reported error, without querying the user.
This option is particularly useful with C++ programs, as it prints out the suppressions with mangled names, as required.
Note that the suppressions printed are as specific as possible. You may want to common up similar ones, by adding
wildcards to function names, and by using frame-level wildcards. The wildcarding facilities are powerful yet flexible,
and with a bit of careful editing, you may be able to suppress a whole family of related errors with only a few
suppressions.
Sometimes two different errors are suppressed by the same suppression, in which case Valgrind will output the
suppression more than once, but you only need to have one copy in your suppression file (but having more than
one won’t cause problems). Also, the suppression name is given as <insert_a_suppression_name_here>;
the name doesn’t really matter, it’s only used with the -v option which prints out all used suppression records.
--input-fd=<number> [default: 0, stdin]
When using --gen-suppressions=yes, Valgrind will stop so as to read keyboard input from you when each
error occurs. By default it reads from the standard input (stdin), which is problematic for programs which close stdin.
This option allows you to specify an alternative file descriptor from which to read input.
--dsymutil=no|yes [yes]
This option is only relevant when running Valgrind on Mac OS X.
Mac OS X uses a deferred debug information (debuginfo) linking scheme. When object files containing debuginfo
are linked into a .dylib or an executable, the debuginfo is not copied into the final file. Instead, the debuginfo must
be linked manually by running dsymutil, a system-provided utility, on the executable or .dylib. The resulting
combined debuginfo is placed in a directory alongside the executable or .dylib, but with the extension .dSYM.
With --dsymutil=no, Valgrind will detect cases where the .dSYM directory is either missing, or is present but
does not appear to match the associated executable or .dylib, most likely because it is out of date. In these cases,
Valgrind will print a warning message but take no further action.
With --dsymutil=yes, Valgrind will, in such cases, automatically run dsymutil as necessary to bring the
debuginfo up to date. For all practical purposes, if you always use --dsymutil=yes, then there is never any need
to run dsymutil manually or as part of your application’s build system, since Valgrind will run it as necessary.
Valgrind will not attempt to run dsymutil on any executable or library in /usr/, /bin/, /sbin/, /opt/, /sw/,
/System/, /Library/ or /Applications/, since dsymutil will always fail in such situations. It fails both
because the debuginfo for such pre-installed system components is not available anywhere, and also because it would
require write privileges in those directories.
Be careful when using --dsymutil=yes, since it will cause pre-existing .dSYM directories to be silently deleted
and re-created. Also note that dsymutil is quite slow, sometimes excessively so.
--max-stackframe=<number> [default: 2000000]
The maximum size of a stack frame. If the stack pointer moves by more than this amount then Valgrind will assume
that the program is switching to a different stack.
You may need to use this option if your program has large stack-allocated arrays. Valgrind keeps track of your
program’s stack pointer. If it changes by more than the threshold amount, Valgrind assumes your program is
switching to a different stack, and Memcheck behaves differently than it would for a stack pointer change smaller
than the threshold. Usually this heuristic works well. However, if your program allocates large structures on the
stack, this heuristic will be fooled, and Memcheck will subsequently report large numbers of invalid stack accesses.
This option allows you to change the threshold to a different value.
You should only consider use of this option if Valgrind’s debug output directs you to do so. In that case it will tell
you the new threshold you should specify.
In general, allocating large structures on the stack is a bad idea, because you can easily run out of stack space,
especially on systems with limited memory or which expect to support large numbers of threads each with a small
stack, and also because the error checking performed by Memcheck is more effective for heap-allocated data than for
stack-allocated data. If you have to use this option, you may wish to consider rewriting your code to allocate on the
heap rather than on the stack.
--main-stacksize=<number> [default: use current ’ulimit’ value]
Specifies the size of the main thread’s stack.
To simplify its memory management, Valgrind reserves all required space for the main thread’s stack at startup. That
means it needs to know the required stack size at startup.
By default, Valgrind uses the current "ulimit" value for the stack size, or 16 MB, whichever is lower. In many cases
this gives a stack size in the range 8 to 16 MB, which almost never overflows for most applications.
If you need a larger total stack size, use --main-stacksize to specify it. Only set it as high as you need, since
reserving far more space than you need (that is, hundreds of megabytes more than you need) constrains Valgrind’s
memory allocators and may reduce the total amount of memory that Valgrind can use. This is only really of
significance on 32-bit machines.
On Linux, you may request a stack of size up to 2GB. Valgrind will stop with a diagnostic message if the stack cannot
be allocated.
--main-stacksize only affects the stack size for the program’s initial thread. It has no bearing on the size of
thread stacks, as Valgrind does not allocate those.
You may need to use both --main-stacksize and --max-stackframe together. It is important to understand
that --main-stacksize sets the maximum total stack size, whilst --max-stackframe specifies the largest size
of any one stack frame. You will have to work out the --main-stacksize value for yourself (usually, if your
application segfaults). But Valgrind will tell you the needed --max-stackframe size, if necessary.
As discussed further in the description of --max-stackframe, a requirement for a large stack is a sign of potential
portability problems. You are best advised to place all large data in heap-allocated memory.
--max-threads=<number> [default: 500]
By default, Valgrind can handle up to 500 threads. Occasionally, that number is too small. Use this option to provide
a different limit. E.g. --max-threads=3000.
2.6.4. malloc-related Options
For tools that use their own version of malloc (e.g. Memcheck, Massif, Helgrind, DRD), the following options
apply.
--alignment=<number> [default: 8 or 16, depending on the platform]
By default Valgrind’s malloc, realloc, etc, return a block whose starting address is 8-byte aligned or 16-byte
aligned (the value depends on the platform and matches the platform default). This option allows you to specify a
different alignment. The supplied value must be greater than or equal to the default, less than or equal to 4096, and
must be a power of two.
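The three constraints can be checked as sketched below; alignment_is_valid is a hypothetical helper (with an assumed 16-byte platform default), not Valgrind's own validation code:

```python
def alignment_is_valid(value, platform_default=16):
    """Sketch of the documented --alignment constraints: the value must
    be >= the platform default, <= 4096, and a power of two."""
    return (platform_default <= value <= 4096
            and value & (value - 1) == 0)   # power-of-two test
```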
--redzone-size=<number> [default: depends on the tool]
Valgrind’s malloc, realloc, etc, add padding blocks before and after each heap block allocated by the program
being run. Such padding blocks are called redzones. The default value for the redzone size depends on the tool. For
example, Memcheck adds and protects a minimum of 16 bytes before and after each block allocated by the client.
This allows it to detect block underruns or overruns of up to 16 bytes.
Increasing the redzone size makes it possible to detect overruns of larger distances, but increases the amount of
memory used by Valgrind. Decreasing the redzone size will reduce the memory needed by Valgrind but also reduces
the chances of detecting over/underruns, so is not recommended.
--xtree-memory=none|allocs|full [none]
Tools replacing Valgrind’s malloc, realloc, etc, can optionally produce an execution tree detailing which piece
of code is responsible for heap memory usage. See Execution Trees for a detailed explanation about execution trees.
When set to none, no memory execution tree is produced.
When set to allocs, the memory execution tree gives the current number of allocated bytes and the current number
of allocated blocks.
When set to full, the memory execution tree gives six different measurements: the current number of allocated bytes
and blocks (same values as for allocs), the total number of allocated bytes and blocks, the total number of freed
bytes and blocks.
Note that the CPU and memory overhead of producing an xtree depends on the tool. The CPU overhead is
small for the value allocs, as the information needed to produce this report is maintained in any case by the
tool. For Massif and Helgrind, specifying full implies capturing a stack trace for each free operation, while
normally these tools only capture an allocation stack trace. For Memcheck, the CPU overhead for the value
full is small, as this can only be used in combination with --keep-stacktraces=alloc-and-free or
--keep-stacktraces=alloc-then-free, which already records a stack trace for each free operation. The
memory overhead varies between 5 and 10 words per unique stacktrace in the xtree, plus the memory needed to record
the stack trace for the free operations, if needed specifically for the xtree.
--xtree-memory-file=<filename> [default: xtmemory.kcg.%p]
Specifies that Valgrind should produce the xtree memory report in the specified file. Any %p or %q sequences
appearing in the filename are expanded in exactly the same way as they are for --log-file. See the description of
--log-file for details.
If the filename contains the extension .ms, then the produced file format will be a massif output file format. If the
filename contains the extension .kcg or no extension is provided or recognised, then the produced file format will
be a callgrind output format.
See Execution Trees for a detailed explanation about execution trees formats.
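The extension rule can be sketched as follows. This is a simplified illustration (it checks for ".ms" anywhere in the name, mirroring the manual's "contains the extension" wording) and xtree_output_format is an invented name:

```python
def xtree_output_format(filename):
    """Sketch of the documented rule for --xtree-memory-file:
    a filename containing the extension '.ms' selects the massif
    format; '.kcg', no extension, or anything unrecognised selects
    the callgrind format."""
    if ".ms" in filename:
        return "massif"
    return "callgrind"
```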
2.6.5. Uncommon Options
These options apply to all tools, as they affect certain obscure workings of the Valgrind core. Most people won’t need
to use them.
--smc-check=<none|stack|all|all-non-file> [default: all-non-file for
x86/amd64/s390x, stack for other archs]
This option controls Valgrind’s detection of self-modifying code. If no checking is done, when a program executes
some code, then overwrites it with new code, and executes the new code, Valgrind will continue to execute the
translations it made for the old code. This will likely lead to incorrect behaviour and/or crashes.
For "modern" architectures -- anything that’s not x86, amd64 or s390x -- the default is stack. This is because a
correct program must take explicit action to reestablish D-I cache coherence following code modification. Valgrind
observes and honours such actions, with the result that self-modifying code is transparently handled with zero extra
cost.
For x86, amd64 and s390x, the program is not required to notify the hardware of required D-I coherence syncing.
Hence the default is all-non-file, which covers the normal case of generating code into an anonymous (non-file-
backed) mmap’d area.
The meanings of the four available settings are as follows. No detection (none), detect self-modifying code on the
stack (which is used by GCC to implement nested functions) (stack), detect self-modifying code everywhere (all),
and detect self-modifying code everywhere except in file-backed mappings (all-non-file).
Running with all will slow Valgrind down noticeably. Running with none will rarely speed things up, since
very little code gets dynamically generated in most programs. The VALGRIND_DISCARD_TRANSLATIONS
client request is an alternative to --smc-check=all and --smc-check=all-non-file that requires more
programmer effort but allows Valgrind to run your program faster, by telling it precisely when translations need to be
re-made.
--smc-check=all-non-file provides a cheaper but more limited version of --smc-check=all. It adds
checks to any translations that do not originate from file-backed memory mappings. Typical applications that generate
code, for example JITs in web browsers, generate code into anonymous mmaped areas, whereas the "fixed" code
of the browser always lives in file-backed mappings. --smc-check=all-non-file takes advantage of this
observation, limiting the overhead of checking to code which is likely to be JIT generated.
--read-inline-info=<yes|no> [default: see below]
When enabled, Valgrind will read information about inlined function calls from DWARF3 debug info. This slows
Valgrind startup and makes it use more memory (typically for each inlined piece of code, 6 words and space for the
function name), but it results in more descriptive stacktraces. For the 3.10.0 release, this functionality is enabled by
default only for Linux, Android and Solaris targets and only for the tools Memcheck, Helgrind and DRD. Here is an
example of some stacktraces with --read-inline-info=no:
==15380== Conditional jump or move depends on uninitialised value(s)
==15380== at 0x80484EA: main (inlinfo.c:6)
==15380==
==15380== Conditional jump or move depends on uninitialised value(s)
==15380== at 0x8048550: fun_noninline (inlinfo.c:6)
==15380== by 0x804850E: main (inlinfo.c:34)
==15380==
==15380== Conditional jump or move depends on uninitialised value(s)
==15380== at 0x8048520: main (inlinfo.c:6)
And here are the same errors with --read-inline-info=yes:
==15377== Conditional jump or move depends on uninitialised value(s)
==15377== at 0x80484EA: fun_d (inlinfo.c:6)
==15377== by 0x80484EA: fun_c (inlinfo.c:14)
==15377== by 0x80484EA: fun_b (inlinfo.c:20)
==15377== by 0x80484EA: fun_a (inlinfo.c:26)
==15377== by 0x80484EA: main (inlinfo.c:33)
==15377==
==15377== Conditional jump or move depends on uninitialised value(s)
==15377== at 0x8048550: fun_d (inlinfo.c:6)
==15377== by 0x8048550: fun_noninline (inlinfo.c:41)
==15377== by 0x804850E: main (inlinfo.c:34)
==15377==
==15377== Conditional jump or move depends on uninitialised value(s)
==15377== at 0x8048520: fun_d (inlinfo.c:6)
==15377== by 0x8048520: main (inlinfo.c:35)
--read-var-info=<yes|no> [default: no]
When enabled, Valgrind will read information about variable types and locations from DWARF3 debug info. This
slows Valgrind startup significantly and makes it use significantly more memory, but for the tools that can take
advantage of it (Memcheck, Helgrind, DRD) it can result in more precise error messages. For example, here are
some standard errors issued by Memcheck:
==15363== Uninitialised byte(s) found during client check request
==15363== at 0x80484A9: croak (varinfo1.c:28)
==15363== by 0x8048544: main (varinfo1.c:55)
==15363== Address 0x80497f7 is 7 bytes inside data symbol "global_i2"
==15363==
==15363== Uninitialised byte(s) found during client check request
==15363== at 0x80484A9: croak (varinfo1.c:28)
==15363== by 0x8048550: main (varinfo1.c:56)
==15363== Address 0xbea0d0cc is on thread 1’s stack
==15363== in frame #1, created by main (varinfo1.c:45)
And here are the same errors with --read-var-info=yes:
==15370== Uninitialised byte(s) found during client check request
==15370== at 0x80484A9: croak (varinfo1.c:28)
==15370== by 0x8048544: main (varinfo1.c:55)
==15370== Location 0x80497f7 is 0 bytes inside global_i2[7],
==15370== a global variable declared at varinfo1.c:41
==15370==
==15370== Uninitialised byte(s) found during client check request
==15370== at 0x80484A9: croak (varinfo1.c:28)
==15370== by 0x8048550: main (varinfo1.c:56)
==15370== Location 0xbeb4a0cc is 0 bytes inside local var "local"
==15370== declared at varinfo1.c:46, in frame #1 of thread 1
--vgdb-poll=<number> [default: 5000]
As part of its main loop, the Valgrind scheduler polls to check whether some activity (such as an external command or
some input from GDB) has to be handled by gdbserver. This poll is done after running the given number of basic
blocks (or slightly more). The poll is quite cheap, so the default value is set relatively low. You might decrease this
value further if vgdb cannot use the ptrace system call to interrupt Valgrind, e.g. because all threads are blocked in a
system call most of the time.
--vgdb-shadow-registers=no|yes [default: no]
When activated, gdbserver will expose the Valgrind shadow registers to GDB. With this, the value of the Valgrind
shadow registers can be examined or changed using GDB. Exposing shadow registers only works with GDB version
7.1 or later.
--vgdb-prefix=<prefix> [default: /tmp/vgdb-pipe]
To communicate with gdb/vgdb, the Valgrind gdbserver creates three files (two named FIFOs and an mmap-ed shared
memory file). The prefix option controls the directory and prefix used when creating these files.
--run-libc-freeres=<yes|no> [default: yes]
This option is only relevant when running Valgrind on Linux.
The GNU C library (libc.so), which is used by all programs, may allocate memory for its own uses. Usually it
doesn’t bother to free that memory when the program ends—there would be no point, since the Linux kernel reclaims
all process resources when a process exits anyway, so it would just slow things down.
The glibc authors realised that this behaviour causes leak checkers, such as Valgrind, to falsely report leaks in glibc,
when a leak check is done at exit. In order to avoid this, they provided a routine called __libc_freeres
specifically to make glibc release all memory it has allocated. Memcheck therefore tries to run __libc_freeres
at exit.
Unfortunately, in some very old versions of glibc, __libc_freeres is sufficiently buggy to cause segmentation
faults. This was particularly noticeable on Red Hat 7.1. So this option is provided in order to inhibit the run
of __libc_freeres. If your program seems to run fine on Valgrind, but segfaults at exit, you may find that
--run-libc-freeres=no fixes that, although at the cost of possibly falsely reporting space leaks in libc.so.
--run-cxx-freeres=<yes|no> [default: yes]
This option is only relevant when running Valgrind on Linux or Solaris C++ programs.
The GNU Standard C++ library (libstdc++.so), which is used by all C++ programs compiled with g++, may
allocate memory for its own uses. Usually it doesn’t bother to free that memory when the program ends—there would
be no point, since the kernel reclaims all process resources when a process exits anyway, so it would just slow things
down.
The gcc authors realised that this behaviour causes leak checkers, such as Valgrind, to falsely report leaks
in libstdc++, when a leak check is done at exit. In order to avoid this, they provided a routine called
__gnu_cxx::__freeres specifically to make libstdc++ release all memory it has allocated. Memcheck therefore
tries to run __gnu_cxx::__freeres at exit.
In case of unforeseen problems with __gnu_cxx::__freeres, the option --run-cxx-freeres=no can be
used to inhibit its run, although at the cost of possibly falsely reporting space leaks in libstdc++.so.
--sim-hints=hint1,hint2,...
Pass miscellaneous hints to Valgrind which slightly modify the simulated behaviour in nonstandard or dangerous ways,
possibly to help the simulation of strange features. By default no hints are enabled. Use with caution! Currently
known hints are:
lax-ioctls: Be very lax about ioctl handling; the only assumption is that the size is correct. Doesn’t require
the full buffer to be initialised when writing. Without this, using some device drivers with a large number of strange
ioctl commands becomes very tiresome.
fuse-compatible: Enable special handling for certain system calls that may block in a FUSE file-system.
This may be necessary when running Valgrind on a multi-threaded program that uses one thread to manage a FUSE
file-system and another thread to access that file-system.
enable-outer: Enable some special magic needed when the program being run is itself Valgrind.
no-inner-prefix: Disable printing a prefix > in front of each stdout or stderr output line in an inner
Valgrind being run by an outer Valgrind. This is useful when running Valgrind regression tests in an outer/inner
setup. Note that the prefix > will always be printed in front of the inner debug logging lines.
no-nptl-pthread-stackcache: This hint is only relevant when running Valgrind on Linux; it is ignored
on Solaris and Mac OS X.
The GNU glibc pthread library (libpthread.so), which is used by pthread programs, maintains a cache of
pthread stacks. When a pthread terminates, the memory used for the pthread stack and some thread local storage
related data structure are not always directly released. This memory is kept in a cache (up to a certain size), and is
re-used if a new thread is started.
This cache causes the helgrind tool to report some false positive race condition errors on this cached memory, as
helgrind does not understand the internal glibc cache synchronisation primitives. So, when using helgrind, disabling
the cache helps to avoid false positive race conditions, in particular when using thread local storage variables (e.g.
variables using the __thread qualifier).
When using the memcheck tool, disabling the cache ensures the memory used by glibc to handle __thread variables
is directly released when a thread terminates.
Note: Valgrind disables the cache using some internal knowledge of the glibc stack cache implementation and by
examining the debug information of the pthread library. This technique is thus somewhat fragile and might not work
for all glibc versions. This has been successfully tested with various glibc versions (e.g. 2.11, 2.16, 2.18) on various
platforms.
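The thread local storage variables mentioned above are ordinary variables carrying the __thread qualifier; a minimal sketch (hypothetical names):

```c
/* A thread-local variable of the kind discussed above: each thread gets
 * its own copy, stored in per-thread data that the glibc stack cache may
 * keep alive after the thread terminates.  Hypothetical names; sketch
 * only, not related to Valgrind's internals. */
__thread int per_thread_counter = 0;

int bump(void) {
    return ++per_thread_counter;   /* touches this thread's copy only */
}
```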
lax-doors: (Solaris only) Be very lax about door syscall handling over unrecognised door file descriptors.
Does not require the full buffer to be initialised when writing. Without this, programs using libdoor(3LIB) functional-
ity with completely proprietary semantics may report a large number of false positives.
fallback-llsc: (MIPS and ARM64 only): Enables an alternative implementation of Load-Linked (LL) and
Store-Conditional (SC) instructions. The standard implementation gives more correct behaviour, but can cause
indefinite looping on certain processor implementations that are intolerant of extra memory references between LL
and SC. So far this is known only to happen on Cavium 3 cores. You should not need to use this flag, since the
relevant cores are detected at startup and the alternative implementation is automatically enabled if necessary. There
is no equivalent anti-flag: you cannot force-disable the alternative implementation, if it is automatically enabled.
The underlying problem exists because the "standard" implementation of LL and SC is done by copying the
LL and SC instructions unchanged into the instrumented code. However, tools may insert extra instrumentation memory
references in between the LL and SC instructions. These memory references are not present in the original
uninstrumented code, and their presence in the instrumented code can cause the SC instructions to persistently fail,
leading to indefinite looping in LL-SC blocks. The alternative implementation gives correct behaviour of LL and
SC instructions between threads in a process, up to and including the ABA scenario. It also gives correct behaviour
between a Valgrinded thread and a non-Valgrinded thread running in a different process, that communicate via
shared memory, but only up to and including correct CAS behaviour -- in this case the ABA scenario may not be
correctly handled.
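The ABA limitation mentioned above can be seen in a minimal single-threaded sketch using C11 atomics (illustration only, unrelated to Valgrind's code): a compare-and-swap checks only the value, so if a location goes A -> B -> A between the read and the CAS, the CAS still succeeds, whereas real LL/SC would fail the SC because the location was written in between.

```c
/* Illustration of the ABA scenario using C11 <stdatomic.h>.  The CAS
 * from value A succeeds even though the location was modified (and
 * restored) in between -- the case that, per the text above, the
 * alternative LL/SC implementation cannot detect across processes. */
#include <stdatomic.h>

int cas_succeeds_despite_aba(void) {
    _Atomic int loc = 100;                 /* value A */
    int expected = atomic_load(&loc);      /* reader observes A */

    atomic_store(&loc, 200);               /* interference: A -> B */
    atomic_store(&loc, 100);               /* ... and back:  B -> A */

    /* CAS from A: succeeds despite the two intervening writes. */
    return atomic_compare_exchange_strong(&loc, &expected, 300);
}
```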
--fair-sched=<no|yes|try> [default: no]
The --fair-sched option controls the locking mechanism used by Valgrind to serialise thread execution. The
locking mechanism controls the way the threads are scheduled, and different settings give different trade-offs between
fairness and performance. For more details about the Valgrind thread serialisation scheme and its impact on
performance and thread scheduling, see Scheduling and Multi-Thread Performance.
• The value --fair-sched=yes activates a fair scheduler. In short, if multiple threads are ready to run, the
threads will be scheduled in a round robin fashion. This mechanism is not available on all platforms or Linux
versions. If not available, using --fair-sched=yes will cause Valgrind to terminate with an error.
You may find this setting improves overall responsiveness if you are running an interactive multithreaded program,
for example a web browser, on Valgrind.
• The value --fair-sched=try activates fair scheduling if available on the platform. Otherwise, it will
automatically fall back to --fair-sched=no.
• The value --fair-sched=no activates a scheduler which does not guarantee fairness between threads ready to
run, but which in general gives the highest performance.
--kernel-variant=variant1,variant2,...
Handle system calls and ioctls arising from minor variants of the default kernel for this platform. This is useful for
running on hacked kernels or with kernel modules which support nonstandard ioctls, for example. Use with caution.
If you don’t understand what this option does then you almost certainly don’t need it. Currently known variants are:
bproc: support the sys_bproc system call on x86. This is for running on BProc, which is a minor variant of
standard Linux which is sometimes used for building clusters.
android-no-hw-tls: some versions of the Android emulator for ARM do not provide a hardware TLS (thread-
local storage) register, and Valgrind crashes at startup. Use this variant to select software support for TLS.
android-gpu-sgx5xx: use this to support handling of proprietary ioctls for the PowerVR SGX 5XX series of
GPUs on Android devices. Failure to select this does not cause stability problems, but may cause Memcheck to
report false errors after the program performs GPU-specific ioctls.
android-gpu-adreno3xx: similarly, use this to support handling of proprietary ioctls for the Qualcomm
Adreno 3XX series of GPUs on Android devices.
--merge-recursive-frames=<number> [default: 0]
Some recursive algorithms, for example balanced binary tree implementations, create many different stack traces, each
containing cycles of calls. A cycle is defined as two identical program counter values separated by zero or more other
program counter values. Valgrind may then use a lot of memory to store all these stack traces. This is a poor use
of memory considering that such stack traces contain repeated uninteresting recursive calls instead of more interesting
information such as the function that has initiated the recursive call.
The option --merge-recursive-frames=<number> instructs Valgrind to detect and merge recursive call
cycles having a size of up to <number> frames. When such a cycle is detected, Valgrind records the cycle in
the stack trace as a unique program counter.
The value 0 (the default) causes no recursive call merging. A value of 1 will cause stack traces of simple recursive
algorithms (for example, a factorial implementation) to be collapsed. A value of 2 will usually be needed to collapse
stack traces produced by recursive algorithms such as binary trees, quick sort, etc. Higher values might be needed for
more complex recursive algorithms.
Note: recursive calls are detected by analysis of program counter values. They are not detected by looking at function
names.
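For instance, a simple self-recursive function like the hypothetical one below produces stack traces containing cycles of size 1 (the same program counter repeated), which --merge-recursive-frames=1 collapses:

```c
/* A simple self-recursive factorial: every recursive level adds the same
 * return address to the stack trace, i.e. a cycle of size 1, which
 * --merge-recursive-frames=1 collapses to a single program counter.
 * A mutually recursive pair (f calls g, g calls f) would need a value
 * of 2.  Hypothetical example, not from the Valgrind sources. */
long factorial(long n) {
    return (n <= 1) ? 1 : n * factorial(n - 1);
}
```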
--num-transtab-sectors=<number> [default: 6 for Android platforms, 16 for all others]
Valgrind translates and instruments your program’s machine code in small fragments (basic blocks). The translations
are stored in a translation cache that is divided into a number of sections (sectors). If the cache is full, the sector
containing the oldest translations is emptied and reused. If these old translations are needed again, Valgrind must
re-translate and re-instrument the corresponding machine code, which is expensive. If the "executed instructions"
working set of a program is big, increasing the number of sectors may improve performance by reducing the number
of re-translations needed. Sectors are allocated on demand. Once allocated, a sector can never be freed, and occupies
considerable space, depending on the tool and the value of --avg-transtab-entry-size (about 40 MB per
sector for Memcheck). Use the option --stats=yes to obtain precise information about the memory used by a
sector and the allocation and recycling of sectors.
--avg-transtab-entry-size=<number> [default: 0, meaning use tool provided default]
Average size of translated basic block. This average size is used to dimension the size of a sector. Each tool
provides a default value to be used. If this default value is too small, the translation sectors will become full too
quickly. If this default value is too big, a significant part of the translation sector memory will be unused. Note
that the average size of a basic block translation depends on the tool, and might depend on tool options. For
example, the memcheck option --track-origins=yes increases the size of the basic block translations. Use
--avg-transtab-entry-size to tune the size of the sectors, either to gain memory or to avoid too many
retranslations.
--aspace-minaddr=<address> [default: depends on the platform]
To avoid potential conflicts with some system libraries, Valgrind does not use the address space below
--aspace-minaddr value, keeping it reserved in case a library specifically requests memory in this region.
So Valgrind guesses a "pessimistic" value depending on the platform. On Linux, by default, Valgrind
avoids using the first 64MB even if typically there is no conflict in this whole zone. You can use the option
--aspace-minaddr to let a memory-hungry application benefit from more of this lower memory. On
the other hand, if you encounter a conflict, increasing the aspace-minaddr value might solve it. Conflicts will typically
manifest themselves as mmap failures in the low range of the address space. The provided address must be page
aligned and must be equal to or bigger than 0x1000 (4KB). To find the default value on your platform, do something such
as valgrind -d -d date 2>&1 | grep -i minaddr. Values lower than 0x10000 (64KB) are known to
create problems on some distributions.
--valgrind-stacksize=<number> [default: 1MB]
For each thread, Valgrind needs its own 'private' stack. The default size for these stacks is generously dimensioned,
and so should be sufficient in most cases. If the size is too small, Valgrind will segfault; before segfaulting, it may
produce a warning when approaching the limit.
Use the option --valgrind-stacksize if such an (unlikely) warning is produced, or Valgrind dies due to a
segmentation violation. Such segmentation violations have been seen when demangling huge C++ symbols.
If your application uses many threads and needs a lot of memory, you can gain some memory by reducing the size of
these Valgrind stacks using the option --valgrind-stacksize.
--show-emwarns=<yes|no> [default: no]
When enabled, Valgrind will emit warnings about its CPU emulation in certain cases. These are usually not
interesting.
--require-text-symbol=:sonamepatt:fnnamepatt
When a shared object whose soname matches sonamepatt is loaded into the process, examine all the text symbols
it exports. If none of those match fnnamepatt, print an error message and abandon the run. This makes it possible
to ensure that the run does not continue unless a given shared object contains a particular function name.
Both sonamepatt and fnnamepatt can be written using the usual ? and * wildcards. For example:
":*libc.so*:foo?bar". You may use characters other than a colon to separate the two patterns; it is
only important that the first character and the separator character are the same. For example, the above example could
also be written "Q*libc.so*Qfoo?bar". Multiple --require-text-symbol flags are allowed, in which
case shared objects that are loaded into the process will be checked against all of them.
The purpose of this is to support reliable usage of marked-up libraries. For example, suppose we have a
version of GCC’s libgomp.so which has been marked up with annotations to support Helgrind. It is only
too easy and confusing to load the wrong, un-annotated libgomp.so into the application. So the idea is:
add a text symbol in the marked-up library, for example annotated_for_helgrind_3_6, and then give
the flag --require-text-symbol=:*libgomp*so*:annotated_for_helgrind_3_6 so that when
libgomp.so is loaded, Valgrind scans its symbol table, and if the symbol isn’t present the run is aborted, rather
than continuing silently with the un-marked-up library. Note that you should put the entire flag in quotes to stop
shells expanding the * and ? wildcards.
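The marker can be any exported text symbol; for example, a marked-up library might simply define a do-nothing global function (hypothetical symbol name, following the example above):

```c
/* A do-nothing global function whose only purpose is to place the text
 * symbol "annotated_for_helgrind_3_6" in the library's symbol table, so
 * that --require-text-symbol can check for it at load time.
 * Hypothetical name taken from the example above. */
void annotated_for_helgrind_3_6(void) { }
```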
--soname-synonyms=syn1=pattern1,syn2=pattern2,...
When a shared library is loaded, Valgrind checks for functions in the library that must be replaced or wrapped. For
example, Memcheck replaces some string and memory functions (strchr, strlen, strcpy, memchr, memcpy, memmove,
etc.) with its own versions. Such replacements are normally done only in shared libraries whose soname matches
a predefined soname pattern (e.g. libc.so* on Linux). By default, no replacement is done for a statically linked
binary or for alternative libraries, except for the allocation functions (malloc, free, calloc, memalign, realloc, operator
new, operator delete, etc.). Such allocation functions are intercepted by default in any shared library or in the executable
if they are exported as global symbols. This means that if a replacement allocation library such as tcmalloc is found, its
functions are also intercepted by default. In some cases, --soname-synonyms can be used to specify
one additional synonym pattern, giving flexibility in the replacement, or to prevent interception of all public allocation
symbols.
Currently, this flexibility is only allowed for the malloc related functions, using the synonym somalloc. This
synonym is usable for all tools doing standard replacement of malloc related functions (e.g. memcheck, massif, drd,
helgrind, exp-dhat, exp-sgcheck).
Alternate malloc library: to replace the malloc related functions in a specific alternate library with soname
mymalloclib.so (and not in any others), give the option --soname-synonyms=somalloc=mymalloclib.so.
A pattern can be used to match the sonames of multiple libraries. For example, --soname-synonyms=somalloc=*tcmalloc*
will match the soname of all variants of the tcmalloc library (native, debug, profiled, ...).
Note: the soname of an ELF shared library can be retrieved using the readelf utility.
Replacements in a statically linked library are done by using the NONE pattern. For example, if
you link with libtcmalloc.a, and only want to intercept the malloc related functions in the exe-
cutable (and standard libraries) themselves, but not any other shared libraries, you can give the option
--soname-synonyms=somalloc=NONE. Note that a NONE pattern will match the main executable
and any shared library having no soname.
To run a "default" Firefox build for Linux, in which JEMalloc is linked in to the main executable, use
--soname-synonyms=somalloc=NONE.
To only intercept allocation symbols in the default system libraries, but not in any other shared library or
the executable defining public malloc or operator new related functions use a non-existing library name like
--soname-synonyms=somalloc=nouserintercepts (where nouserintercepts can be any non-
existing library name).
The shared library of the dynamic (runtime) linker is excluded from the search for global public symbols, such as those
for the malloc related functions (identified by the somalloc synonym).
--progress-interval=<number> [default: 0, meaning ’disabled’]
This is an enhancement to Valgrind’s debugging output. It is unlikely to be of interest to end users.
When number is set to a non-zero value, Valgrind will print a one-line progress summary every number seconds.
Valid settings for number are between 0 and 3600 inclusive. Here’s some example output with number set to 10:
PROGRESS: U 110s, W 113s, 97.3% CPU, EvC 414.79M, TIn 616.7k, TOut 0.5k, #thr 67
PROGRESS: U 120s, W 124s, 96.8% CPU, EvC 505.27M, TIn 636.6k, TOut 3.0k, #thr 64
PROGRESS: U 130s, W 134s, 97.0% CPU, EvC 574.90M, TIn 657.5k, TOut 3.0k, #thr 63
Each line shows:
U: total user time
W: total wallclock time
CPU: overall average cpu use
EvC: number of event checks. An event check is a backwards branch in the simulated program, so this is a measure
of forward progress of the program
TIn: number of code blocks instrumented by the JIT
TOut: number of instrumented code blocks that have been thrown away
#thr: number of threads in the program
From the progress of these, it is possible to observe:
when the program is compute bound (TIn rises slowly, EvC rises rapidly)
when the program is in a spinloop (TIn/TOut fixed, EvC rises rapidly)
when the program is JIT-bound (TIn rises rapidly)
when the program is rapidly discarding code (TOut rises rapidly)
when the program is about to achieve some expected state (EvC arrives at some value you expect)
when the program is idling (U rises more slowly than W)
2.6.6. Debugging Options
There are also some options for debugging Valgrind itself. You shouldn’t need to use them in the normal run of
things. If you wish to see the list, use the --help-debug option.
If you wish to debug your program rather than debugging Valgrind itself, then you should use the options
--vgdb=yes or --vgdb=full.
2.6.7. Setting Default Options
Note that Valgrind also reads options from three places:
1. The file ~/.valgrindrc
2. The environment variable $VALGRIND_OPTS
3. The file ./.valgrindrc
These are processed in the given order, before the command-line options. Options processed later override those
processed earlier; for example, options in ./.valgrindrc will take precedence over those in ~/.valgrindrc.
Please note that the ./.valgrindrc file is ignored if it is not a regular file, or is marked as world writeable, or is
not owned by the current user. This is because the ./.valgrindrc can contain options that are potentially harmful
or can be used by a local attacker to execute code under your user account.
Any tool-specific options put in $VALGRIND_OPTS or the .valgrindrc files should be prefixed with the tool
name and a colon. For example, if you want Memcheck to always do leak checking, you can put the following entry
in ~/.valgrindrc:
--memcheck:leak-check=yes
This will be ignored if any tool other than Memcheck is run. Without the memcheck: part, this will cause problems
if you select other tools that don’t understand --leak-check=yes.
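Putting these pieces together, a ~/.valgrindrc mixing core options with tool-prefixed ones might look like this (contents assumed purely for illustration):

```
--num-callers=20
--memcheck:leak-check=yes
--massif:stacks=yes
```

The unprefixed option applies to every run; the prefixed lines take effect only when the named tool is selected.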
2.7. Support for Threads
Threaded programs are fully supported.
The main thing to point out with respect to threaded programs is that your program will use the native threading
library, but Valgrind serialises execution so that only one (kernel) thread is running at a time. This approach avoids
the horrible implementation problems of implementing a truly multithreaded version of Valgrind, but it does mean that
threaded apps never use more than one CPU simultaneously, even if you have a multiprocessor or multicore machine.
Valgrind doesn’t schedule the threads itself. It merely ensures that only one thread runs at once, using a simple
locking scheme. The actual thread scheduling remains under control of the OS kernel. What this does mean, though,
is that your program will see very different scheduling when run on Valgrind than it does when running normally. This
is both because Valgrind is serialising the threads, and because the code runs so much slower than normal.
This difference in scheduling may cause your program to behave differently, if you have some kind of concurrency,
critical race, locking, or similar, bugs. In that case you might consider using the tools Helgrind and/or DRD to track
them down.
On Linux, Valgrind also supports direct use of the clone system call, futex and so on. clone is supported where
either everything is shared (a thread) or nothing is shared (fork-like); partial sharing will fail.
2.7.1. Scheduling and Multi-Thread Performance
A thread executes code only when it holds the abovementioned lock. After executing some number of instructions,
the running thread will release the lock. All threads ready to run will then compete to acquire the lock.
The --fair-sched option controls the locking mechanism used to serialise thread execution.
The default pipe based locking mechanism (--fair-sched=no) is available on all platforms. Pipe based locking
does not guarantee fairness between threads: it is quite likely that a thread that has just released the lock reacquires it
immediately, even though other threads are ready to run. When using pipe based locking, different runs of the same
multithreaded application might give very different thread scheduling.
An alternative locking mechanism, based on futexes, is available on some platforms. If available, it is activated by
--fair-sched=yes or --fair-sched=try. Futex based locking ensures fairness (round-robin scheduling)
between threads: if multiple threads are ready to run, the lock will be given to the thread which first requested the
lock. Note that a thread which is blocked in a system call (e.g. in a blocking read system call) has not (yet) requested
the lock: such a thread requests the lock only after the system call is finished.
The fairness of the futex based locking produces better reproducibility of thread scheduling for different executions of
a multithreaded application. This better reproducibility is particularly helpful when using Helgrind or DRD.
Valgrind’s use of thread serialisation implies that only one thread at a time may run. On a multiprocessor/multicore
system, the running thread is assigned to one of the CPUs by the OS kernel scheduler. When a thread acquires the
lock, sometimes the thread will be assigned to the same CPU as the thread that just released the lock. Sometimes, the
thread will be assigned to another CPU. When using pipe based locking, the thread that just acquired the lock will
usually be scheduled on the same CPU as the thread that just released the lock. With the futex based mechanism, the
thread that just acquired the lock will more often be scheduled on another CPU.
Valgrind’s thread serialisation and CPU assignment by the OS kernel scheduler can interact badly with the CPU
frequency scaling available on many modern CPUs. To decrease power consumption, the frequency of a CPU or
core is automatically decreased if the CPU/core has not been used recently. If the OS kernel often assigns the thread
which just acquired the lock to another CPU/core, it is quite likely that this CPU/core is currently at a low frequency.
The frequency of this CPU will be increased after some time. However, during this time, the (only) running thread
will have run at the low frequency. Once this thread has run for some time, it will release the lock. Another thread
will acquire this lock, and might be scheduled again on another CPU whose clock frequency was decreased in the
meantime.
The futex based locking causes threads to change CPUs/cores more often. So, if CPU frequency scaling is activated,
the futex based locking might significantly decrease the performance of a multithreaded app running under Valgrind.
Performance losses of up to 50% have been observed, as compared to running on a machine with CPU frequency
scaling disabled. The pipe based locking scheme also interacts badly with CPU frequency scaling, with performance
losses in the range of 10..20% having been observed.
To avoid such performance degradation, you should indicate to the kernel that all CPUs/cores should always run at
maximum clock speed. Depending on your Linux distribution, CPU frequency scaling may be controlled using a
graphical interface or using command line such as cpufreq-selector or cpufreq-set.
An alternative way to avoid these problems is to tell the OS scheduler to tie a Valgrind process to a specific (fixed)
CPU using the taskset command. This should ensure that the selected CPU does not fall below its maximum
frequency setting so long as any thread of the program has work to do.
2.8. Handling of Signals
Valgrind has a fairly complete signal implementation. It should be able to cope with any POSIX-compliant use of
signals.
If you’re using signals in clever ways (for example, catching SIGSEGV, modifying page state
and restarting the instruction), you’re probably relying on precise exceptions. In this case,
you will need to use --vex-iropt-register-updates=allregs-at-mem-access or
--vex-iropt-register-updates=allregs-at-each-insn.
If your program dies as a result of a fatal core-dumping signal, Valgrind will generate its own core file
(vgcore.NNNNN) containing your program’s state. You may use this core file for post-mortem debugging
with GDB or similar. (Note: it will not generate a core if your core dump size limit is 0.) At the time of writing the
core dumps do not include all the floating point register information.
In the unlikely event that Valgrind itself crashes, the operating system will create a core dump in the usual way.
2.9. Execution Trees
An execution tree (xtree) is made of a set of stack traces, each associated with some resource consumption or
event counts. Depending on the xtree, different event counts/resource consumptions can be recorded
in the xtree. Multiple tools can produce memory use xtrees. Memcheck can output the results of a leak search as an xtree.
A typical usage of an xtree is to show a graphical or textual representation of the heap usage of a program. The
figure below is a graphical representation of a heap usage xtree, produced by kcachegrind. In the kcachegrind output, you
can see that main's current heap usage (allocated indirectly) is 528 bytes: 388 bytes allocated indirectly via a call to
function f1 and 140 bytes allocated indirectly via a call to function f2. f2 has allocated memory by calling g2, while
f1 has allocated memory by calling g11 and g12. g11, g12 and g2 have directly called a memory allocation function
(malloc), and so have a non-zero 'Self' value. Note that when kcachegrind shows an xtree, the 'Called' column and
the call number indications in the Call Graph are not significant (they are always set to 0 or 1, independently of the real
number of calls). kcachegrind versions >= 0.8.0 no longer show this irrelevant xtree call number information.
[Figure: heap usage xtree of the example program, as displayed graphically by kcachegrind]
An xtree heap memory report is produced at the end of the execution when the option --xtree-memory
is given. It can also be produced on demand using the xtmemory monitor command (see Valgrind monitor
commands). Currently, an xtree heap memory report can be produced by the memcheck, helgrind and massif tools.
The xtrees produced by the option --xtree-memory or the xtmemory monitor command show the following
events/resource consumption describing heap usage:
curB current number of allocated Bytes. The number of allocated bytes is added to the curB value of a stack trace
for each allocation, and subtracted when a block allocated by this stack trace is released (by another, "freeing" stack
trace).
curBk current number of allocated Blocks, maintained similarly to curB: +1 for each allocation, -1 when the block
is freed.
totB total allocated Bytes, increased by the number of allocated bytes for each allocation.
totBk total allocated Blocks, maintained similarly to totB: +1 for each allocation.
totFdB total Freed Bytes, increased each time a block is released by this ("freeing") stack trace: + number of freed
bytes for each free operation.
totFdBk total Freed Blocks, maintained similarly to totFdB: +1 for each free operation.
Note that the last 4 counts are produced only when --xtree-memory=full was given at startup.
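The bookkeeping described above can be sketched as follows (the struct and function names are hypothetical; this is illustrative code, not Valgrind's internal implementation):

```c
/* Hypothetical sketch of the six per-stack-trace counters listed above. */
typedef struct {
    long curB, curBk;      /* current bytes / blocks */
    long totB, totBk;      /* totals: only ever grow */
    long totFdB, totFdBk;  /* freed bytes / blocks ("full" mode only) */
} XtCounts;

/* Called for the stack trace that performs an allocation. */
void xt_record_alloc(XtCounts *c, long bytes) {
    c->curB += bytes;  c->curBk += 1;
    c->totB += bytes;  c->totBk += 1;
}

/* 'alloc_c' is the trace that allocated the block; 'free_c' is the
 * (possibly different) "freeing" trace that releases it. */
void xt_record_free(XtCounts *alloc_c, XtCounts *free_c, long bytes) {
    alloc_c->curB -= bytes;   alloc_c->curBk -= 1;
    free_c->totFdB += bytes;  free_c->totFdBk += 1;
}
```

Note that a free operation decreases the current counters of the allocating trace but credits the freed totals to the freeing trace, which is why totFdB/totFdBk can be non-zero for functions that never allocate.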
Xtrees can be saved in 2 file formats, the "Callgrind Format" and the "Massif Format".
• Callgrind Format
An xtree file in the Callgrind Format contains a single callgraph, associating each stack trace with the values
recorded in the xtree.
Different Callgrind Format file visualisers are available:
The Valgrind distribution includes the callgrind_annotate command line utility, which reads in the xtree data and
prints a sorted list of functions, optionally with source annotation. Note that due to xtree specificities, you must
give the option --inclusive=yes to callgrind_annotate.
For graphical visualization of the data, you can use KCachegrind, which is a KDE/Qt based GUI that makes it easy
to navigate the large amount of data that an xtree can contain.
• Massif Format
An xtree file in the Massif Format contains one detailed callgraph tree for each type of event recorded in the
xtree. So, for --xtree-memory=alloc, the output file will contain 2 detailed trees (for the counts curB and
curBk), while --xtree-memory=full will give a file with 6 detailed trees.
Different Massif Format file visualisers are available. The Valgrind distribution includes the ms_print command line
utility, which produces an easy-to-read representation of a massif output file. See Running Massif and Using Massif
and ms_print for more details about visualising Massif Format output files.
Note that for equivalent information, the Callgrind Format is more compact than the Massif Format. However, the
Callgrind Format always contains the full data: no filtering is done during file production; filtering is done by
visualisers such as kcachegrind. kcachegrind is particularly easy to use for analysing big xtree data containing
multiple event counts or resource consumptions. The Massif Format (optionally) contains only a part of the data. For
example, the Massif tool might filter out some of the data, according to the --threshold option.
To clarify the xtree concept, below are several extracts of the output produced by the following commands:
valgrind --xtree-memory=full --xtree-memory-file=xtmemory.kcg mfg
callgrind_annotate --auto=yes --inclusive=yes --sort=curB:100,curBk:100,totB:100,totBk:100,totFdB:100,totFdBk:100 xtmemory.kcg
The extract below shows that the program mfg has allocated in total 770 bytes in 60 different blocks. Of these 60
blocks, 19 were freed, releasing a total of 242 bytes. The heap currently contains 528 bytes in 41 blocks.
--------------------------------------------------------------------------------
curB curBk totB totBk totFdB totFdBk
--------------------------------------------------------------------------------
528 41 770 60 242 19 PROGRAM TOTALS
The extract below gives more detail about which functions have allocated or released memory. For example, we see
that main has (directly or indirectly) allocated 770 bytes of memory and freed (directly or indirectly) 242 bytes of
memory. The function f1 has (directly or indirectly) allocated 570 bytes of memory, and has not (directly or
indirectly) freed memory. Of the 570 bytes allocated by function f1, 388 bytes (34 blocks) have not been released.
--------------------------------------------------------------------------------
curB curBk totB totBk totFdB totFdBk file:function
--------------------------------------------------------------------------------
528 41 770 60 242 19 mfg.c:main
388 34 570 50 0 0 mfg.c:f1
220 20 330 30 0 0 mfg.c:g11
168 14 240 20 0 0 mfg.c:g12
140 7 200 10 0 0 mfg.c:g2
140 7 200 10 0 0 mfg.c:f2
0 0 0 0 131 10 mfg.c:freeY
0 0 0 0 111 9 mfg.c:freeX
The extract below gives more detailed information about the callgraph and about which source lines/calls have
(directly or indirectly) allocated or released memory. It shows that the 770 bytes allocated by main were indirectly
allocated by calls to f1 and f2. Similarly, the 570 bytes allocated by f1 were indirectly allocated by calls to g11
and g12. Of the 330 bytes allocated by the 30 calls to g11, 168 bytes have not been freed. The function freeY
(called once by main) has released in total 10 blocks and 131 bytes.
--------------------------------------------------------------------------------
-- Auto-annotated source: /home/philippe/valgrind/littleprogs/ + mfg.c
--------------------------------------------------------------------------------
curB curBk totB totBk totFdB totFdBk
....
. . . . . . static void freeY(void)
. . . . . . {
. . . . . . int i;
. . . . . . for (i = 0; i < next_ptr; i++)
. . . . . . if(i % 5 == 0 && ptrs[i] != NULL)
0 0 0 0 131 10 free(ptrs[i]);
. . . . . . }
. . . . . . static void f1(void)
. . . . . . {
. . . . . . int i;
. . . . . . for (i = 0; i < 30; i++)
220 20 330 30 0 0 g11();
. . . . . . for (i = 0; i < 20; i++)
168 14 240 20 0 0 g12();
. . . . . . }
. . . . . . int main()
. . . . . . {
388 34 570 50 0 0 f1();
140 7 200 10 0 0 f2();
0 0 0 0 111 9 freeX();
0 0 0 0 131 10 freeY();
. . . . . . return 0;
. . . . . . }
Heap memory xtrees help you understand how your (big) program is using the heap. A full heap memory xtree
helps to pinpoint code that allocates a lot of small objects: such allocations might be replaced by a more efficient
technique, such as allocating one big block with malloc and then dividing it into smaller blocks, in order to decrease
the CPU and/or memory overhead of allocating many small blocks. Such full xtree information complements e.g.
what callgrind can show: callgrind can show the number of calls to a function (such as malloc) but does not indicate
the volume of memory allocated (or freed).
A full heap memory xtree can also identify code that allocates and frees a lot of blocks: the total footprint of the
program might not reflect the fact that the same memory was allocated and then released over and over.
Finally, xtree visualisers such as kcachegrind help to identify big memory consumers, in order to possibly
optimise the amount of memory needed by your program.
2.10. Building and Installing Valgrind
We use the standard Unix ./configure, make, make install mechanism. Once you have completed make
install you may then want to run the regression tests with make regtest.
In addition to the usual --prefix=/path/to/install/tree, there are three options which affect how Valgrind
is built:
--enable-inner
This builds Valgrind with some special magic hacks which make it possible to run it on a standard build of Valgrind
(what the developers call "self-hosting"). Ordinarily you should not use this option as various kinds of safety
checks are disabled.
--enable-only64bit
--enable-only32bit
On 64-bit platforms (amd64-linux, ppc64-linux, amd64-darwin), Valgrind is by default built in such a way that both
32-bit and 64-bit executables can be run. Sometimes this cleverness is a problem for a variety of reasons. These
two options allow for single-target builds in this situation. If you issue both, the configure script will complain.
Note they are ignored on 32-bit-only platforms (x86-linux, ppc32-linux, arm-linux, x86-darwin).
The configure script tests the version of the X server currently indicated by the current $DISPLAY. This is a
known bug. The intention was to detect the version of the current X client libraries, so that correct suppressions could
be selected for them, but instead the test checks the server version. This is just plain wrong.
If you are building a binary package of Valgrind for distribution, please read README_PACKAGERS. It contains
some important information.
Apart from that, there’s not much excitement here. Let us know if you have build problems.
2.11. If You Have Problems
Contact us at http://www.valgrind.org/.
See Limitations for the known limitations of Valgrind, and for a list of programs which are known not to work on it.
All parts of the system make heavy use of assertions and internal self-checks. They are permanently enabled, and we
have no plans to disable them. If one of them breaks, please mail us!
If you get an assertion failure in m_mallocfree.c, this may have happened because your program wrote off the
end of a heap block, or before its beginning, thus corrupting heap metadata. Valgrind hopefully will have emitted a
message to that effect before dying in this way.
Read the Valgrind FAQ for more advice about common problems, crashes, etc.
2.12. Limitations
The following list of limitations seems long. However, most programs actually work fine.
Valgrind will run programs on the supported platforms subject to the following constraints:
On Linux, Valgrind determines at startup the size of the ’brk segment’ using the RLIMIT_DATA rlim_cur, with a
minimum of 1 MB and a maximum of 8 MB. Valgrind outputs a message each time a program tries to extend the
brk segment beyond the size determined at startup. Most programs will work properly with this limit, typically
by switching to the use of mmap to get more memory. If your program really needs a big brk segment, you must
change the 8 MB hardcoded limit and recompile Valgrind.
On x86 and amd64, there is no support for 3DNow! instructions. If the translator encounters these, Valgrind will
generate a SIGILL when the instruction is executed. Apart from that, on x86 and amd64, essentially all instructions
are supported, up to and including AVX and AES in 64-bit mode and SSSE3 in 32-bit mode. 32-bit mode does in
fact support the bare minimum SSE4 instructions needed to run programs on MacOSX 10.6 on 32-bit targets.
On ppc32 and ppc64, almost all integer, floating point and Altivec instructions are supported. Specifically: integer
and FP insns that are mandatory for PowerPC, the "General-purpose optional" group (fsqrt, fsqrts, stfiwx), the
"Graphics optional" group (fre, fres, frsqrte, frsqrtes), and the Altivec (also known as VMX) SIMD instruction
set, are supported. Also, instructions from the Power ISA 2.05 specification, as present in POWER6 CPUs, are
supported.
On ARM, essentially the entire ARMv7-A instruction set is supported, in both ARM and Thumb mode. ThumbEE
and Jazelle are not supported. NEON, VFPv3 and ARMv6 media support is fairly complete.
If your program does its own memory management, rather than using malloc/new/free/delete, it should still work,
but Memcheck’s error checking won’t be so effective. If you describe your program’s memory management
scheme using "client requests" (see The Client Request mechanism), Memcheck can do better. Nevertheless, using
malloc/new and free/delete is still the best approach.
Valgrind’s signal simulation is not as robust as it could be. Basic POSIX-compliant sigaction and sigprocmask
functionality is supplied, but it’s conceivable that things could go badly awry if you do weird things with signals.
Workaround: don’t. Programs that do non-POSIX signal tricks are in any case inherently unportable, so should be
avoided if possible.
Machine instructions, and system calls, have been implemented on demand. So it’s possible, although unlikely,
that a program will fall over with a message to that effect. If this happens, please report all the details printed out,
so we can try and implement the missing feature.
Your program's memory consumption is greatly increased while running under Valgrind's Memcheck tool. This
is due to the large amount of administrative information maintained behind the scenes. Another cause is that
Valgrind dynamically translates the original executable. Translated, instrumented code is 12-18 times larger than
the original, so you can easily end up with 150+ MB of translations when running (e.g.) a web browser.
Valgrind can handle dynamically-generated code just fine. If you regenerate code over the top of old code (i.e.
at the same memory addresses) and the code is on the stack, Valgrind will realise the code has changed, and work
correctly. This is necessary to handle the trampolines GCC uses to implement nested functions. If you regenerate
code somewhere other than the stack, and you are running on a 32- or 64-bit x86 CPU, you will need to use the
--smc-check=all option, and Valgrind will run more slowly than normal. Alternatively, you can add client
requests that tell Valgrind when your program has overwritten code.
On other platforms (ARM, PowerPC) Valgrind observes and honours the cache invalidation hints that programs are
obliged to emit to notify new code, and so self-modifying-code support should work automatically, without the need
for --smc-check=all.
Valgrind has the following limitations in its implementation of x86/AMD64 floating point relative to IEEE754.
Precision: There is no support for 80 bit arithmetic. Internally, Valgrind represents all such "long double" numbers
in 64 bits, and so there may be some differences in results. Whether or not this is critical remains to be seen. Note,
the x86/amd64 fldt/fstpt instructions (read/write 80-bit numbers) are correctly simulated, using conversions to/from
64 bits, so that in-memory images of 80-bit numbers look correct if anyone wants to see.
The impression observed from many FP regression tests is that the accuracy differences aren’t significant. Generally
speaking, if a program relies on 80-bit precision, there may be difficulties porting it to non x86/amd64 platforms
which only support 64-bit FP precision. Even on x86/amd64, the program may get different results depending on
whether it is compiled to use SSE2 instructions (64-bits only), or x87 instructions (80-bit). The net effect is to
make FP programs behave as if they had been run on a machine with 64-bit IEEE floats, for example PowerPC.
On amd64 FP arithmetic is done by default on SSE2, so amd64 looks more like PowerPC than x86 from an FP
perspective, and there are far fewer noticeable accuracy differences than with x86.
Rounding: Valgrind does observe the 4 IEEE-mandated rounding modes (to nearest, to +infinity, to -infinity, to
zero) for the following conversions: float to integer, integer to float where there is a possibility of loss of precision,
and float-to-float rounding. For all other FP operations, only the IEEE default mode (round to nearest) is supported.
Numeric exceptions in FP code: IEEE754 defines five types of numeric exception that can happen: invalid operation
(sqrt of negative number, etc), division by zero, overflow, underflow, inexact (loss of precision).
For each exception, two courses of action are defined by IEEE754: either (1) a user-defined exception handler may
be called, or (2) a default action is defined, which "fixes things up" and allows the computation to proceed without
throwing an exception.
Currently Valgrind only supports the default fixup actions. Again, feedback on the importance of exception support
would be appreciated.
When Valgrind detects that the program is trying to exceed any of these limitations (setting exception handlers,
rounding mode, or precision control), it can print a message giving a traceback of where this has happened, and
continue execution. This behaviour used to be the default, but the messages are annoying and so showing them is
now disabled by default. Use --show-emwarns=yes to see them.
The above limitations define precisely the IEEE754 ’default’ behaviour: default fixup on all exceptions, round-to-
nearest operations, and 64-bit precision.
Valgrind has the following limitations in its implementation of x86/AMD64 SSE2 FP arithmetic, relative to
IEEE754.
Essentially the same: no exceptions, and limited observance of rounding mode. Also, SSE2 has control bits which
make it treat denormalised numbers as zero (DAZ) and a related action, flush denormals to zero (FTZ). Both of
these cause SSE2 arithmetic to be less accurate than IEEE requires. Valgrind detects, ignores, and can warn about,
attempts to enable either mode.
Valgrind has the following limitations in its implementation of ARM VFPv3 arithmetic, relative to IEEE754.
Essentially the same: no exceptions, and limited observance of rounding mode. Also, switching the VFP unit into
vector mode will cause Valgrind to abort the program -- it has no way to emulate vector uses of VFP at a reasonable
performance level. This is no big deal given that non-scalar uses of VFP instructions are in any case deprecated.
Valgrind has the following limitations in its implementation of PPC32 and PPC64 floating point arithmetic, relative
to IEEE754.
Scalar (non-Altivec): Valgrind provides a bit-exact emulation of all floating point instructions, except for "fre" and
"fres", which are done more precisely than required by the PowerPC architecture specification. All floating point
operations observe the current rounding mode.
However, fpscr[FPRF] is not set after each operation. That could be done but would give measurable performance
overheads, and so far no need for it has been found.
As on x86/AMD64, IEEE754 exceptions are not supported: all floating point exceptions are handled using the
default IEEE fixup actions. Valgrind detects, ignores, and can warn about, attempts to unmask the 5 IEEE FP
exception kinds by writing to the floating-point status and control register (fpscr).
Vector (Altivec, VMX): essentially as with x86/AMD64 SSE/SSE2: no exceptions, and limited observance of
rounding mode. For Altivec, FP arithmetic is done in IEEE/Java mode, which is more accurate than the Linux
default setting. "More accurate" means that denormals are handled properly, rather than simply being flushed to
zero.
Programs which are known not to work are:
emacs starts up but immediately concludes it is out of memory and aborts. It may be that Memcheck does not
provide a good enough emulation of the mallinfo function. Emacs works fine if you build it to use the standard
malloc/free routines.
2.13. An Example Run
This is the log for a run of a small program using Memcheck. The program is in fact correct, and the reported error is
as the result of a potentially serious code generation bug in GNU g++ (snapshot 20010527).
sewardj@phoenix:~/newmat10$ ~/Valgrind-6/valgrind -v ./bogon
==25832== Valgrind 0.10, a memory error detector for x86 RedHat 7.1.
==25832== Copyright (C) 2000-2001, and GNU GPL’d, by Julian Seward.
==25832== Startup, with flags:
==25832== --suppressions=/home/sewardj/Valgrind/redhat71.supp
==25832== reading syms from /lib/ld-linux.so.2
==25832== reading syms from /lib/libc.so.6
==25832== reading syms from /mnt/pima/jrs/Inst/lib/libgcc_s.so.0
==25832== reading syms from /lib/libm.so.6
==25832== reading syms from /mnt/pima/jrs/Inst/lib/libstdc++.so.3
==25832== reading syms from /home/sewardj/Valgrind/valgrind.so
==25832== reading syms from /proc/self/exe
==25832==
==25832== Invalid read of size 4
==25832== at 0x8048724: BandMatrix::ReSize(int,int,int) (bogon.cpp:45)
==25832== by 0x80487AF: main (bogon.cpp:66)
==25832== Address 0xBFFFF74C is not stack’d, malloc’d or free’d
==25832==
==25832== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
==25832== malloc/free: in use at exit: 0 bytes in 0 blocks.
==25832== malloc/free: 0 allocs, 0 frees, 0 bytes allocated.
==25832== For a detailed leak analysis, rerun with: --leak-check=yes
The GCC folks fixed this about a week before GCC 3.0 shipped.
2.14. Warning Messages You Might See
Some of these only appear if you run in verbose mode (enabled by -v):
More than 100 errors detected. Subsequent errors will still be recorded,
but in less detail than before.
After 100 different errors have been shown, Valgrind becomes more conservative about collecting them. It then
requires only the program counters in the top two stack frames to match when deciding whether or not two errors
are really the same one. Prior to this point, the PCs in the top four frames are required to match. This hack has
the effect of slowing down the appearance of new errors after the first 100. The 100 constant can be changed by
recompiling Valgrind.
More than 1000 errors detected. I’m not reporting any more. Final
error counts may be inaccurate. Go fix your program!
After 1000 different errors have been detected, Valgrind ignores any more. It seems unlikely that collecting even
more different ones would be of practical help to anybody, and it avoids the danger that Valgrind spends more
and more of its time comparing new errors against an ever-growing collection. As above, the 1000 number is a
compile-time constant.
Warning: client switching stacks?
Valgrind spotted such a large change in the stack pointer that it guesses the client is switching to a different stack.
At this point it makes a kludgey guess where the base of the new stack is, and sets memory permissions accordingly.
At the moment "large change" is defined as a change of more that 2000000 in the value of the stack pointer register.
If Valgrind guesses wrong, you may get many bogus error messages following this and/or have crashes in the
stack trace recording code. You might avoid these problems by informing Valgrind about the stack bounds using
VALGRIND_STACK_REGISTER client request.
Warning: client attempted to close Valgrind’s logfile fd <number>
Valgrind doesn’t allow the client to close the logfile, because you’d never see any diagnostic information after that
point. If you see this message, you may want to use the --log-fd=<number> option to specify a different
logfile file-descriptor number.
Warning: noted but unhandled ioctl <number>
Valgrind observed a call to one of the vast family of ioctl system calls, but did not modify its memory status
info (because nobody has yet written a suitable wrapper). The call will still have gone through, but you may get
spurious errors after this as a result of the non-update of the memory info.
Warning: set address range perms: large range <number>
Diagnostic message, mostly for benefit of the Valgrind developers, to do with memory permissions.
3. Using and understanding the
Valgrind core: Advanced Topics
This chapter describes advanced aspects of the Valgrind core services, which are mostly of interest to power users who
wish to customise and modify Valgrind’s default behaviours in certain useful ways. The subjects covered are:
The "Client Request" mechanism
Debugging your program using Valgrind’s gdbserver and GDB
• Function Wrapping
3.1. The Client Request mechanism
Valgrind has a trapdoor mechanism via which the client program can pass all manner of requests and queries to
Valgrind and the current tool. Internally, this is used extensively to make various things work, although that’s not
visible from the outside.
For your convenience, a subset of these so-called client requests is provided to allow you to tell Valgrind facts about
the behaviour of your program, and also to make queries. In particular, your program can tell Valgrind about things
that it otherwise would not know, leading to better results.
Clients need to include a header file to make this work. Which header file depends on which client requests you use.
Some client requests are handled by the core, and are defined in the header file valgrind/valgrind.h. Tool-
specific header files are named after the tool, e.g. valgrind/memcheck.h. Each tool-specific header file includes
valgrind/valgrind.h so you don’t need to include it in your client if you include a tool-specific header. All
header files can be found in the include/valgrind directory of wherever Valgrind was installed.
The macros in these header files have the magical property that they generate code in-line which Valgrind can spot.
However, the code does nothing when not run on Valgrind, so you are not forced to run your program under Valgrind
just because you use the macros in this file. Also, you are not required to link your program with any extra supporting
libraries.
The code added to your binary has negligible performance impact: on x86, amd64, ppc32, ppc64 and ARM, the
overhead is 6 simple integer instructions and is probably undetectable except in tight loops. However, if you really
wish to compile out the client requests, you can compile with -DNVALGRIND (analogous to -DNDEBUG's effect on
assert).
You are encouraged to copy the valgrind/*.h headers into your project’s include directory, so your program
doesn’t have a compile-time dependency on Valgrind being installed. The Valgrind headers, unlike most of the rest
of the code, are under a BSD-style license so you may include them without worrying about license incompatibility.
Here is a brief description of the macros available in valgrind.h, which work with more than one tool (see the
tool-specific documentation for explanations of the tool-specific macros).
RUNNING_ON_VALGRIND:
Returns 1 if running on Valgrind, 0 if running on the real CPU. If you are running Valgrind on itself, returns the
number of layers of Valgrind emulation you’re running on.
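A typical guarded use looks like the following sketch. The `pick_iteration_count` helper is hypothetical; the fallback `#define` is an assumption for building where the Valgrind headers are not installed (the manual's recommended alternative is to copy `valgrind/*.h` into your project):

```c
/* Include the Valgrind header when available; otherwise fall back to
 * "not running under Valgrind". */
#if defined(__has_include)
# if __has_include(<valgrind/valgrind.h>)
#  include <valgrind/valgrind.h>
# endif
#endif
#ifndef RUNNING_ON_VALGRIND
# define RUNNING_ON_VALGRIND 0   /* fallback: assume a native run */
#endif

/* Hypothetical helper: shrink a stress test when the large slowdown
 * of running under Valgrind applies. */
int pick_iteration_count(void) {
    return RUNNING_ON_VALGRIND ? 1000 : 100000;
}
```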
VALGRIND_DISCARD_TRANSLATIONS:
Discards translations of code in the specified address range. Useful if you are debugging a JIT compiler or some
other dynamic code generation system. After this call, attempts to execute code in the invalidated address range will
cause Valgrind to make new translations of that code, which is probably the semantics you want. Note that code
invalidations are expensive because finding all the relevant translations quickly is very difficult, so try not to call it
often. Note that you can be clever about this: you only need to call it when an area which previously contained code is
overwritten with new code. You can choose to write code into fresh memory, and just call this occasionally to discard
large chunks of old code all at once.
Alternatively, for transparent self-modifying-code support, use --smc-check=all, or run on ppc32/Linux,
ppc64/Linux or ARM/Linux.
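A sketch of the "write code into fresh memory, discard occasionally" usage might look like this. `jit_overwrite` is a hypothetical JIT helper, and the fallback no-op `#define` is an assumption for building without the Valgrind headers:

```c
#if defined(__has_include)
# if __has_include(<valgrind/valgrind.h>)
#  include <valgrind/valgrind.h>
# endif
#endif
#ifndef VALGRIND_DISCARD_TRANSLATIONS
# define VALGRIND_DISCARD_TRANSLATIONS(addr, len) ((void)0)  /* no-op fallback */
#endif
#include <string.h>

/* Hypothetical JIT patch step: after overwriting previously-executed
 * code at 'buf', discard Valgrind's stale translations of that range
 * so the next execution is retranslated. */
void jit_overwrite(unsigned char *buf, const unsigned char *code, size_t len) {
    memcpy(buf, code, len);
    VALGRIND_DISCARD_TRANSLATIONS(buf, len);
}
```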
VALGRIND_COUNT_ERRORS:
Returns the number of errors found so far by Valgrind. Can be useful in test harness code when combined with
the --log-fd=-1 option; this runs Valgrind silently, but the client program can detect when errors occur. Only
useful for tools that report errors, e.g. it’s useful for Memcheck, but for Cachegrind it will always return zero because
Cachegrind doesn’t report errors.
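The test-harness pattern described above can be sketched as follows (the `errors_since` helper is hypothetical; the fallback `#define` is an assumption for building without the Valgrind headers):

```c
#if defined(__has_include)
# if __has_include(<valgrind/valgrind.h>)
#  include <valgrind/valgrind.h>
# endif
#endif
#ifndef VALGRIND_COUNT_ERRORS
# define VALGRIND_COUNT_ERRORS 0   /* fallback: no tool, no errors */
#endif

/* With --log-fd=-1 Valgrind prints nothing, but the harness can still
 * detect whether the tool reported new errors since a baseline. */
unsigned errors_since(unsigned baseline) {
    return (unsigned)VALGRIND_COUNT_ERRORS - baseline;
}
```

A harness would record the count before a test case, run it, and fail the case if `errors_since(before)` is non-zero.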
VALGRIND_MALLOCLIKE_BLOCK:
If your program manages its own memory instead of using the standard malloc/new/new[], tools that track
information about heap blocks will not do nearly as good a job. For example, Memcheck won't detect nearly as
many errors, and the error messages won’t be as informative. To improve this situation, use this macro just after your
custom allocator allocates some new memory. See the comments in valgrind.h for information on how to use it.
VALGRIND_FREELIKE_BLOCK:
This should be used in conjunction with VALGRIND_MALLOCLIKE_BLOCK. Again, see valgrind.h for
information on how to use it.
VALGRIND_RESIZEINPLACE_BLOCK:
Informs a Valgrind tool that the size of an allocated block has been modified but not its address. See valgrind.h
for more information on how to use it.
VALGRIND_CREATE_MEMPOOL, VALGRIND_DESTROY_MEMPOOL, VALGRIND_MEMPOOL_ALLOC,
VALGRIND_MEMPOOL_FREE, VALGRIND_MOVE_MEMPOOL, VALGRIND_MEMPOOL_CHANGE,
VALGRIND_MEMPOOL_EXISTS:
These are similar to VALGRIND_MALLOCLIKE_BLOCK and VALGRIND_FREELIKE_BLOCK but are tailored
towards code that uses memory pools. See Memory Pools for a detailed description.
VALGRIND_NON_SIMD_CALL[0123]:
Executes a function in the client program on the real CPU, not the virtual CPU that Valgrind normally runs code on.
The function must take an integer (holding a thread ID) as the first argument and then 0, 1, 2 or 3 more arguments
(depending on which client request is used). These are used in various ways internally to Valgrind. They might be
useful to client programs.
Warning: Only use these if you really know what you are doing. They aren’t entirely reliable, and can cause Valgrind
to crash. See valgrind.h for more details.
VALGRIND_PRINTF(format, ...):
Print a printf-style message to the Valgrind log file. The message is prefixed with the PID between a pair of **
markers. (Like all client requests, nothing is output if the client program is not running under Valgrind.) Output is not
produced until a newline is encountered, or subsequent Valgrind output is printed; this allows you to build up a single
line of output over multiple calls. Returns the number of characters output, excluding the PID prefix.
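The "build up a line over multiple calls" behaviour can be sketched like this (the `log_progress` helper is hypothetical; the fallback `#define` is an assumption for building without the Valgrind headers, and it simply discards the message):

```c
#if defined(__has_include)
# if __has_include(<valgrind/valgrind.h>)
#  include <valgrind/valgrind.h>
# endif
#endif
#ifndef VALGRIND_PRINTF
# define VALGRIND_PRINTF(...) (0)   /* silent fallback when not available */
#endif

/* Build one log line over two calls; it appears only in the Valgrind
 * log, and only once the newline is reached. */
int log_progress(int done, int total) {
    VALGRIND_PRINTF("progress: %d/%d", done, total);
    return (int)VALGRIND_PRINTF("\n");
}
```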
VALGRIND_PRINTF_BACKTRACE(format, ...):
Like VALGRIND_PRINTF (in particular, the return value is identical), but prints a stack backtrace immediately
afterwards.
VALGRIND_MONITOR_COMMAND(command):
Execute the given monitor command (a string). Returns 0 if command is recognised. Returns 1 if command
is not recognised. Note that some monitor commands provide access to a functionality also accessible via a
specific client request. For example, memcheck leak search can be requested from the client program using
VALGRIND_DO_LEAK_CHECK or via the monitor command "leak_search". Note that the syntax of the command
string is only verified at run-time. So, where a specific client request exists, it is preferable to use it, to get better
compile-time verification of the arguments.
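For example, a program could trigger a Memcheck leak search via the monitor interface as sketched below. `request_leak_check` is a hypothetical wrapper, and the fallback `#define` is an assumption for building without the Valgrind headers; the return value is hedged to 0 or 1 because the command is only interpreted at run-time:

```c
#if defined(__has_include)
# if __has_include(<valgrind/valgrind.h>)
#  include <valgrind/valgrind.h>
# endif
#endif
#ifndef VALGRIND_MONITOR_COMMAND
# define VALGRIND_MONITOR_COMMAND(cmd) ((void)(cmd), 0)  /* fallback */
#endif

/* Ask the current tool to run a monitor command; under Memcheck this
 * performs a leak search, elsewhere the command may be unrecognised. */
int request_leak_check(void) {
    return VALGRIND_MONITOR_COMMAND("leak_check summary");
}
```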
VALGRIND_STACK_REGISTER(start, end):
Registers a new stack. Informs Valgrind that the memory range between start and end is a unique stack. Returns a
stack identifier that can be used with other VALGRIND_STACK_* calls.
Valgrind will use this information to determine if a change to the stack pointer is an item pushed onto the stack or a
change over to a new stack. Use this if you’re using a user-level thread package and are noticing crashes in stack trace
recording or spurious errors from Valgrind about uninitialized memory reads.
Warning: Unfortunately, this client request is unreliable and best avoided.
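A user-level thread package might use it as sketched below. The `CoStack` type and functions are hypothetical, and the fallback `#define`s are an assumption for building without the Valgrind headers:

```c
#if defined(__has_include)
# if __has_include(<valgrind/valgrind.h>)
#  include <valgrind/valgrind.h>
# endif
#endif
#ifndef VALGRIND_STACK_REGISTER
# define VALGRIND_STACK_REGISTER(start, end) 0   /* fallback: dummy id */
# define VALGRIND_STACK_DEREGISTER(id) ((void)0)
#endif
#include <stdlib.h>

/* Announce a coroutine stack so Valgrind treats stack-pointer jumps
 * into it as a stack switch rather than a wild change. */
typedef struct { char *base; size_t size; unsigned vg_id; } CoStack;

int costack_create(CoStack *s, size_t size) {
    s->base = malloc(size);
    if (s->base == NULL) return -1;
    s->size = size;
    s->vg_id = (unsigned)VALGRIND_STACK_REGISTER(s->base, s->base + size);
    return 0;
}

void costack_destroy(CoStack *s) {
    VALGRIND_STACK_DEREGISTER(s->vg_id);
    free(s->base);
}
```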
VALGRIND_STACK_DEREGISTER(id):
Deregisters a previously registered stack. Informs Valgrind that previously registered memory range with stack id id
is no longer a stack.
Warning: Unfortunately, this client request is unreliable and best avoided.
VALGRIND_STACK_CHANGE(id, start, end):
Changes a previously registered stack. Informs Valgrind that the previously registered stack with stack id id has
changed its start and end values. Use this if your user-level thread package implements stack growth.
Warning: Unfortunately, this client request is unreliable and best avoided.
3.2. Debugging your program using Valgrind
gdbserver and GDB
A program running under Valgrind is not executed directly by the CPU. Instead it runs on a synthetic CPU provided
by Valgrind. This is why an ordinary debugger cannot debug your program while it runs on Valgrind.
This section describes how GDB can interact with the Valgrind gdbserver to provide a fully debuggable program under
Valgrind. Used in this way, GDB also provides an interactive usage of Valgrind core or tool functionalities, including
incremental leak search under Memcheck and on-demand Massif snapshot production.
3.2.1. Quick Start: debugging in 3 steps
The simplest way to get started is to run Valgrind with the flag --vgdb-error=0. Then follow the on-screen
directions, which give you the precise commands needed to start GDB and connect it to your program.
Otherwise, here’s a slightly more verbose overview.
If you want to debug a program with GDB when using the Memcheck tool, start Valgrind like this:
valgrind --vgdb=yes --vgdb-error=0 prog
In another shell, start GDB:
gdb prog
Then give the following command to GDB:
(gdb) target remote | vgdb
You can now debug your program e.g. by inserting a breakpoint and then using the GDB continue command.
This quick start information is enough for basic usage of the Valgrind gdbserver. The sections below describe
more advanced functionality provided by the combination of Valgrind and GDB. Note that the command line flag
--vgdb=yes can be omitted, as this is the default value.
3.2.2. Valgrind gdbserver overall organisation
The GNU GDB debugger is typically used to debug a process running on the same machine. In this mode, GDB uses
system calls to control and query the program being debugged. This works well, but only allows GDB to debug a
program running on the same computer.
GDB can also debug processes running on a different computer. To achieve this, GDB defines a protocol (that is, a
set of query and reply packets) that facilitates fetching the value of memory or registers, setting breakpoints, etc. A
gdbserver is an implementation of this "GDB remote debugging" protocol. To debug a process running on a remote
computer, a gdbserver (sometimes called a GDB stub) must run on that remote computer.
The Valgrind core provides a built-in gdbserver implementation, which is activated using --vgdb=yes or
--vgdb=full. This gdbserver allows the process running on Valgrind’s synthetic CPU to be debugged remotely.
GDB sends protocol query packets (such as "get register contents") to the Valgrind embedded gdbserver. The gdb-
server executes the queries (for example, it will get the register values of the synthetic CPU) and gives the results back
to GDB.
GDB can use various kinds of channels (TCP/IP, serial line, etc) to communicate with the gdbserver. In the case
of Valgrind’s gdbserver, communication is done via a pipe and a small helper program called vgdb, which acts as an
intermediary. If no GDB is in use, vgdb can also be used to send monitor commands to the Valgrind gdbserver from
a shell command line.
3.2.3. Connecting GDB to a Valgrind gdbserver
To debug a program "prog" running under Valgrind, you must ensure that the Valgrind gdbserver is
activated by specifying either --vgdb=yes or --vgdb=full. A secondary command line option,
--vgdb-error=number, can be used to tell the gdbserver only to become active once the specified number of
errors have been shown. A value of zero will therefore cause the gdbserver to become active at startup, which allows
you to insert breakpoints before starting the run. For example:
valgrind --tool=memcheck --vgdb=yes --vgdb-error=0 ./prog
The Valgrind gdbserver is invoked at startup and indicates it is waiting for a connection from a GDB:
==2418== Memcheck, a memory error detector
==2418== Copyright (C) 2002-2017, and GNU GPL’d, by Julian Seward et al.
==2418== Using Valgrind-3.14.0.GIT and LibVEX; rerun with -h for copyright info
==2418== Command: ./prog
==2418==
==2418== (action at startup) vgdb me ...
GDB (in another shell) can then be connected to the Valgrind gdbserver. For this, GDB must be started on the program
prog:
gdb ./prog
You then indicate to GDB that you want to debug a remote target:
(gdb) target remote | vgdb
GDB then starts a vgdb relay application to communicate with the Valgrind embedded gdbserver:
(gdb) target remote | vgdb
Remote debugging using | vgdb
relaying data between gdb and process 2418
Reading symbols from /lib/ld-linux.so.2...done.
Reading symbols from /usr/lib/debug/lib/ld-2.11.2.so.debug...done.
Loaded symbols for /lib/ld-linux.so.2
[Switching to Thread 2418]
0x001f2850 in _start () from /lib/ld-linux.so.2
(gdb)
Note that vgdb is provided as part of the Valgrind distribution. You do not need to install it separately.
If vgdb detects that there are multiple Valgrind gdbservers that can be connected to, it will list all such servers and
their PIDs, and then exit. You can then reissue the GDB "target" command, but specifying the PID of the process you
want to debug:
(gdb) target remote | vgdb
Remote debugging using | vgdb
no --pid= arg given and multiple valgrind pids found:
use --pid=2479 for valgrind --tool=memcheck --vgdb=yes --vgdb-error=0 ./prog
use --pid=2481 for valgrind --tool=memcheck --vgdb=yes --vgdb-error=0 ./prog
use --pid=2483 for valgrind --vgdb=yes --vgdb-error=0 ./another_prog
Remote communication error: Resource temporarily unavailable.
(gdb) target remote | vgdb --pid=2479
Remote debugging using | vgdb --pid=2479
relaying data between gdb and process 2479
Reading symbols from /lib/ld-linux.so.2...done.
Reading symbols from /usr/lib/debug/lib/ld-2.11.2.so.debug...done.
Loaded symbols for /lib/ld-linux.so.2
[Switching to Thread 2479]
0x001f2850 in _start () from /lib/ld-linux.so.2
(gdb)
Once GDB is connected to the Valgrind gdbserver, it can be used in the same way as if you were debugging the
program natively:
Breakpoints can be inserted or deleted.
Variables and register values can be examined or modified.
Signal handling can be configured (printing, ignoring).
Execution can be controlled (continue, step, next, stepi, etc).
Program execution can be interrupted using Control-C.
And so on. Refer to the GDB user manual for a complete description of GDB’s functionality.
3.2.4. Connecting to an Android gdbserver
When developing applications for Android, you will typically use a development system (on which the Android NDK
is installed) to compile your application. An Android target system or emulator will be used to run the application. In
this setup, Valgrind and vgdb will run on the Android system, while GDB will run on the development system. GDB
will connect to the vgdb running on the Android system using the Android NDK ’adb forward’ application.
Example: on the Android system, execute the following:
valgrind --vgdb-error=0 --vgdb=yes prog
# and then in another shell, run:
vgdb --port=1234
On the development system, execute the following commands:
adb forward tcp:1234 tcp:1234
gdb prog
(gdb) target remote :1234
GDB will use a local tcp/ip connection to connect to the Android adb forwarder. Adb will establish a relay connection
between the host system and the Android target system. Be sure to use the GDB delivered in the Android NDK system
(typically, arm-linux-androideabi-gdb), as the host GDB is probably not able to debug Android arm applications. Note
that the local port number (used by GDB) need not be equal to the port number used by vgdb: adb can forward
tcp/ip between different port numbers.
In the current release, the GDB server is not enabled by default for Android, due to problems in establishing a suitable
directory in which Valgrind can create the necessary FIFOs (named pipes) for communication purposes. You can still
try to use the GDB server, but you will need to explicitly enable it using the flag --vgdb=yes or --vgdb=full.
Additionally, you will need to select a temporary directory which is (a) writable by Valgrind, and (b) supports FIFOs.
This is the main difficulty. Often, /sdcard satisfies requirement (a), but fails for (b) because it is a VFAT file
system and VFAT does not support pipes. Possibilities you could try are /data/local, /data/local/Inst (if
you installed Valgrind there), or /data/data/name.of.my.app, if you are running a specific application and it
has its own directory of that form. This last possibility may have the highest probability of success.
You can specify the temporary directory to use either via the --with-tmpdir= configure time flag, or by setting
environment variable TMPDIR when running Valgrind (on the Android device, not on the Android NDK development
host). Another alternative is to specify the directory for the FIFOs using the --vgdb-prefix= Valgrind command
line option.
We hope to have a better story for temporary directory handling on Android in the future. The difficulty is that, unlike
in standard Unixes, there is no single temporary file directory that reliably works across all devices and scenarios.
3.2.5. Monitor command handling by the Valgrind gdbserver
The Valgrind gdbserver provides additional Valgrind-specific functionality via "monitor commands". Such monitor
commands can be sent from the GDB command line or from the shell command line or requested by the client program
using the VALGRIND_MONITOR_COMMAND client request. See Valgrind monitor commands for the list of the
Valgrind core monitor commands available regardless of the Valgrind tool selected.
The following tools provide tool-specific monitor commands:
Memcheck Monitor Commands
Callgrind Monitor Commands
Massif Monitor Commands
Helgrind Monitor Commands
An example of a tool specific monitor command is the Memcheck monitor command leak_check full
reachable any. This requests a full reporting of the allocated memory blocks. To have this leak check executed,
use the GDB command:
(gdb) monitor leak_check full reachable any
GDB will send the leak_check command to the Valgrind gdbserver. The Valgrind gdbserver will execute the
monitor command itself, if it recognises it to be a Valgrind core monitor command. If it is not recognised as such, it
is assumed to be tool-specific and is handed to the tool for execution. For example:
(gdb) monitor leak_check full reachable any
==2418== 100 bytes in 1 blocks are still reachable in loss record 1 of 1
==2418== at 0x4006E9E: malloc (vg_replace_malloc.c:236)
==2418== by 0x804884F: main (prog.c:88)
==2418==
==2418== LEAK SUMMARY:
==2418== definitely lost: 0 bytes in 0 blocks
==2418== indirectly lost: 0 bytes in 0 blocks
==2418== possibly lost: 0 bytes in 0 blocks
==2418== still reachable: 100 bytes in 1 blocks
==2418== suppressed: 0 bytes in 0 blocks
==2418==
(gdb)
As with other GDB commands, the Valgrind gdbserver will accept abbreviated monitor command names and
arguments, as long as the given abbreviation is unambiguous. For example, the above leak_check command
can also be typed as:
(gdb) mo l f r a
The letters mo are recognised by GDB as being an abbreviation for monitor. So GDB sends the string l f r a to
the Valgrind gdbserver. The letters provided in this string are unambiguous for the Valgrind gdbserver. This therefore
gives the same output as the unabbreviated command and arguments. If the provided abbreviation is ambiguous, the
Valgrind gdbserver will report the list of commands (or argument values) that can match:
(gdb) mo v. n
v. can match v.set v.info v.wait v.kill v.translate v.do
(gdb) mo v.i n
n_errs_found 0 n_errs_shown 0 (vgdb-error 0)
(gdb)
Instead of sending a monitor command from GDB, you can also send these from a shell command line. For example,
the following command lines, when given in a shell, will cause the same leak search to be executed by the process
3145:
vgdb --pid=3145 leak_check full reachable any
vgdb --pid=3145 l f r a
Note that the Valgrind gdbserver automatically continues the execution of the program after a standalone invocation of
vgdb. Monitor commands sent from GDB do not cause the program to continue: the program execution is controlled
explicitly using GDB commands such as "continue" or "next".
3.2.6. Valgrind gdbserver thread information
Valgrind’s gdbserver enriches the output of the GDB info threads command with Valgrind-specific information.
The operating system’s thread number is followed by Valgrind’s internal index for that thread ("tid") and by the
Valgrind scheduler thread state:
(gdb) info threads
4 Thread 6239 (tid 4 VgTs_Yielding) 0x001f2832 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
*3 Thread 6238 (tid 3 VgTs_Runnable) make_error (s=0x8048b76 "called from London") at prog.c:20
2 Thread 6237 (tid 2 VgTs_WaitSys) 0x001f2832 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
1 Thread 6234 (tid 1 VgTs_Yielding) main (argc=1, argv=0xbedcc274) at prog.c:105
(gdb)
3.2.7. Examining and modifying Valgrind shadow registers
When the option --vgdb-shadow-registers=yes is given, the Valgrind gdbserver will let GDB examine
and/or modify Valgrind’s shadow registers. GDB version 7.1 or later is needed for this to work. For x86 and
amd64, GDB version 7.2 or later is needed.
For each CPU register, the Valgrind core maintains two shadow register sets. These shadow registers can be accessed
from GDB by giving a postfix s1 or s2 for respectively the first and second shadow register. For example, the x86
register eax and its two shadows can be examined using the following commands:
(gdb) p $eax
$1 = 0
(gdb) p $eaxs1
$2 = 0
(gdb) p $eaxs2
$3 = 0
(gdb)
Float shadow registers are shown by GDB as unsigned integer values instead of float values, as it is expected that these
shadow values are mostly used for memcheck validity bits.
Intel/amd64 AVX registers ymm0 to ymm15 also have their shadow registers. However, GDB presents the shadow
values using two "half" registers. For example, the half shadow registers for ymm9 are xmm9s1 (lower half for set 1),
ymm9hs1 (upper half for set 1), xmm9s2 (lower half for set 2), ymm9hs2 (upper half for set 2). Note the inconsistent
notation for the names of the half registers: the lower part starts with an x, the upper part starts with a y and has an
h before the shadow postfix.
The special presentation of the AVX shadow registers is due to the fact that GDB independently retrieves the lower
and upper half of the ymm registers. GDB does not however know that the shadow half registers have to be shown
combined.
3.2.8. Limitations of the Valgrind gdbserver
Debugging with the Valgrind gdbserver is very similar to native debugging. Valgrind’s gdbserver implementation is
quite complete, and so provides most of the GDB debugging functionality. There are however some limitations and
peculiarities:
Precision of "stop-at" commands.
GDB commands such as "step", "next", "stepi", breakpoints and watchpoints, will stop the execution of the process.
With the option --vgdb=yes, the process might not stop at the exact requested instruction. Instead, it might
continue execution of the current basic block and stop at one of the following basic blocks. This is linked to the fact
that Valgrind gdbserver has to instrument a block to allow stopping at the exact instruction requested. Currently,
re-instrumentation of the block currently being executed is not supported. So, if the action requested by GDB (e.g.
single stepping or inserting a breakpoint) implies re-instrumentation of the current block, the GDB action may not
be executed precisely.
This limitation applies when the basic block currently being executed has not yet been instrumented for debugging.
This typically happens when the gdbserver is activated due to the tool reporting an error or to a watchpoint. If the
gdbserver block has been activated following a breakpoint, or if a breakpoint has been inserted in the block before
its execution, then the block has already been instrumented for debugging.
If you use the option --vgdb=full, then GDB "stop-at" commands will be obeyed precisely. The downside
is that this requires each instruction to be instrumented with an additional call to a gdbserver helper function,
which gives considerable overhead (+500% for memcheck) compared to --vgdb=no. Option --vgdb=yes has
negligible overhead compared to --vgdb=no.
Processor registers and flags values.
When the Valgrind gdbserver stops on an error, on a breakpoint or when single stepping, register and flag values
might not always be up to date, due to the optimisations done by the Valgrind core. The default value
--vex-iropt-register-updates=unwindregs-at-mem-access ensures that the registers needed to
make a stack trace (typically PC/SP/FP) are up to date at each memory access (i.e. memory exception points).
Disabling some optimisations using the following values will increase the precision of registers and flags values (a
typical performance impact for memcheck is given for each option).
--vex-iropt-register-updates=allregs-at-mem-access (+10%) ensures that all registers and
flags are up to date at each memory access.
--vex-iropt-register-updates=allregs-at-each-insn (+25%) ensures that all registers and
flags are up to date at each instruction.
Note that --vgdb=full (+500%, see above Precision of "stop-at" commands) automatically activates
--vex-iropt-register-updates=allregs-at-each-insn.
Hardware watchpoint support by the Valgrind gdbserver.
The Valgrind gdbserver can simulate hardware watchpoints if the selected tool provides support for it. Currently,
only Memcheck provides hardware watchpoint simulation. The hardware watchpoint simulation provided by
Memcheck is much faster than GDB software watchpoints, which are implemented by GDB checking the value
of the watched zone(s) after each instruction. Hardware watchpoint simulation also provides read watchpoints.
The hardware watchpoint simulation by Memcheck has some limitations compared to real hardware watchpoints.
However, the number and length of simulated watchpoints are not limited.
Typically, the number of (real) hardware watchpoints is limited. For example, the x86 architecture supports a
maximum of 4 hardware watchpoints, each watchpoint watching 1, 2, 4 or 8 bytes. The Valgrind gdbserver does
not have any limitation on the number of simulated hardware watchpoints. It also has no limitation on the length of
the memory zone being watched. Using GDB version 7.4 or later allows full use of the flexibility of the Valgrind
gdbserver’s simulated hardware watchpoints. Previous GDB versions do not understand that Valgrind gdbserver
watchpoints have no length limit.
Memcheck implements hardware watchpoint simulation by marking the watched address ranges as being unad-
dressable. When a hardware watchpoint is removed, the range is marked as addressable and defined. Hardware
watchpoint simulation of addressable-but-undefined memory zones works properly, but has the undesirable side
effect of marking the zone as defined when the watchpoint is removed.
Write watchpoints might not be reported at the exact instruction that writes the monitored area, unless option
--vgdb=full is given. Read watchpoints will always be reported at the exact instruction reading the watched
memory.
It is better to avoid using hardware watchpoints on memory that is not (yet) addressable: in such a case, GDB will fall
back to extremely slow software watchpoints. Also, if you do not quit GDB between two debugging sessions, the
hardware watchpoints of the previous sessions will be re-inserted as software watchpoints if the watched memory
zone is not addressable at program startup.
Stepping inside shared libraries on ARM.
For unknown reasons, stepping inside shared libraries on ARM may fail. A workaround is to use the ldd command
to find the list of shared libraries and their load addresses, and then inform GDB of those addresses using the GDB
command "add-symbol-file". Example:
(gdb) shell ldd ./prog
libc.so.6 => /lib/libc.so.6 (0x4002c000)
/lib/ld-linux.so.3 (0x40000000)
(gdb) add-symbol-file /lib/libc.so.6 0x4002c000
add symbol table from file "/lib/libc.so.6" at
.text_addr = 0x4002c000
(y or n) y
Reading symbols from /lib/libc.so.6...(no debugging symbols found)...done.
(gdb)
GDB version needed for ARM and PPC32/64.
You must use a GDB version which is able to read the XML target description sent by a gdbserver. This is the standard
setup if GDB was configured and built with the "expat" library. If your GDB was not configured with XML support,
it will report an error message when using the "target" command. Debugging will not work because GDB will then
not be able to fetch the registers from the Valgrind gdbserver. For ARM programs using the Thumb instruction set,
you must use a GDB version of 7.1 or later, as earlier versions have problems with next/step/breakpoints in Thumb
code.
Stack unwinding on PPC32/PPC64.
On PPC32/PPC64, stack unwinding for leaf functions (functions that do not call any other functions) works
properly only when you give the option --vex-iropt-register-updates=allregs-at-mem-access
or --vex-iropt-register-updates=allregs-at-each-insn. You must also pass this option in
order to get a precise stack when a signal is trapped by GDB.
Breakpoints encountered multiple times.
Some instructions (e.g. x86 "rep movsb") are translated by Valgrind using a loop. If a breakpoint is placed on
such an instruction, the breakpoint will be encountered multiple times -- once for each step of the "implicit" loop
implementing the instruction.
Execution of Inferior function calls by the Valgrind gdbserver.
GDB allows the user to "call" functions inside the process being debugged. Such calls are named "inferior calls" in
the GDB terminology. A typical use of an inferior call is to execute a function that prints a human-readable version
of a complex data structure. To make an inferior call, use the GDB "print" command followed by the function to
call and its arguments. As an example, the following GDB command causes an inferior call to the libc "printf"
function to be executed by the process being debugged:
(gdb) p printf("process being debugged has pid %d\n", getpid())
$5 = 36
(gdb)
The Valgrind gdbserver supports inferior function calls. Whilst an inferior call is running, the Valgrind tool will
report errors as usual. If you do not want to have such errors stop the execution of the inferior call, you can use
v.set vgdb-error to set a big value before the call, then manually reset it to its original value when the call is
complete.
To execute inferior calls, GDB changes registers such as the program counter, and then continues the execution
of the program. In a multithreaded program, all threads are continued, not just the thread instructed to make the
inferior call. If another thread reports an error or encounters a breakpoint, the evaluation of the inferior call is
abandoned.
Note that inferior function calls are a powerful GDB feature, but should be used with caution. For example, if the
program being debugged is stopped inside the function "printf", forcing a recursive call to printf via an inferior call
will very probably create problems. The Valgrind tool might also add another level of complexity to inferior calls,
e.g. by reporting tool errors during the inferior call or due to the instrumentation done.
Connecting to or interrupting a Valgrind process blocked in a system call.
Connecting to or interrupting a Valgrind process blocked in a system call requires the "ptrace" system call to be
usable. This may be disabled in your kernel for security reasons.
When running your program, Valgrind’s scheduler periodically checks whether there is any work to be handled by
the gdbserver. Unfortunately this check is only done if at least one thread of the process is runnable. If all the
threads of the process are blocked in a system call, then the checks do not happen, and the Valgrind scheduler will
not invoke the gdbserver. In such a case, the vgdb relay application will "force" the gdbserver to be invoked, without
the intervention of the Valgrind scheduler.
Such forced invocation of the Valgrind gdbserver is implemented by vgdb using ptrace system calls. On a properly
implemented kernel, the ptrace calls done by vgdb will not influence the behaviour of the program running under
Valgrind. If however they do, giving the option --max-invoke-ms=0 to the vgdb relay application will disable
the usage of ptrace calls. The consequence of disabling ptrace usage in vgdb is that a Valgrind process blocked in
a system call cannot be woken up or interrupted from GDB until it executes enough basic blocks to let the Valgrind
scheduler’s normal checking take effect.
When ptrace is disabled in vgdb, you can increase the responsiveness of the Valgrind gdbserver to commands or
interrupts by giving a lower value to the option --vgdb-poll. If your application is blocked in system calls
most of the time, using a very low value for --vgdb-poll will cause the gdbserver to be invoked sooner. The
gdbserver polling done by Valgrind’s scheduler is very efficient, so the increased polling frequency should not cause
significant performance degradation.
When ptrace is disabled in vgdb, a query packet sent by GDB may take significant time to be handled by the Valgrind
gdbserver. In such cases, GDB might encounter a protocol timeout. To avoid this, you can increase the value of
the timeout by using the GDB command "set remotetimeout".
Ubuntu versions 10.10 and later may restrict the scope of ptrace to the children of the process calling ptrace. As
the Valgrind process is not a child of vgdb, such restricted scoping causes the ptrace calls to fail. To avoid that,
Valgrind will automatically allow all processes belonging to the same userid to "ptrace" a Valgrind process, by using
PR_SET_PTRACER.
Unblocking processes blocked in system calls is not currently implemented on Mac OS X and Android. So you
cannot connect to or interrupt a process blocked in a system call on Mac OS X or Android.
Unblocking processes blocked in system calls is implemented via an agent thread on Solaris. This is quite a different
approach than using ptrace on Linux, but leads to an equivalent result: the Valgrind gdbserver is invoked. Note that
the agent thread is a Solaris OS feature and cannot be disabled.
Changing register values.
The Valgrind gdbserver will only modify the values of the thread’s registers when the thread is in status Runnable
or Yielding. In other states (typically, WaitSys), attempts to change register values will fail. Amongst other things,
this means that inferior calls are not executed for a thread which is in a system call, since the Valgrind gdbserver
does not implement system call restart.
Unsupported GDB functionality.
GDB provides a lot of debugging functionality and not all of it is supported. Specifically, the following are not
supported: reversible debugging and tracepoints.
Unknown limitations or problems.
The combination of GDB, Valgrind and the Valgrind gdbserver probably has unknown other limitations and
problems. If you encounter strange or unexpected behaviour, feel free to report a bug. But first please verify
that the limitation or problem is not inherent to GDB or the GDB remote protocol. You may be able to do so by
checking the behaviour when using the standard gdbserver that is part of the GDB package.
3.2.9. vgdb command line options
Usage: vgdb [OPTION]... [[-c] COMMAND]...
vgdb ("Valgrind to GDB") is a small program that is used as an intermediary between Valgrind and GDB or a shell.
Therefore, it has two usage modes:
1. As a standalone utility, it is used from a shell command line to send monitor commands to a process running under
Valgrind. For this usage, the vgdb OPTION(s) must be followed by the monitor command to send. To send more
than one command, separate them with the -c option.
2. In combination with GDB "target remote |" command, it is used as the relay application between GDB and the
Valgrind gdbserver. For this usage, only OPTION(s) can be given, but no COMMAND can be given.
vgdb accepts the following options:
--pid=<number>
Specifies the PID of the process to which vgdb must connect. This option is useful in case more than one Valgrind
gdbserver can be connected to. If the --pid argument is not given and multiple Valgrind gdbserver processes are
running, vgdb will report the list of such processes and then exit.
--vgdb-prefix
Must be given to both Valgrind and vgdb if you want to change the default prefix for the FIFOs (named pipes) used
for communication between the Valgrind gdbserver and vgdb.
--wait=<number>
Instructs vgdb to search for available Valgrind gdbservers for the specified number of seconds. This makes it possible
to start a vgdb process before starting the Valgrind gdbserver with which you intend the vgdb to communicate. This
option is useful when used in conjunction with a --vgdb-prefix that is unique to the process you want to wait for.
Also, if you use the --wait argument in the GDB "target remote" command, you must set the GDB remotetimeout
to a value bigger than the --wait argument value. See option --max-invoke-ms (just below) for an example of
setting the remotetimeout value.
--max-invoke-ms=<number>
Gives the number of milliseconds after which vgdb will force the invocation of gdbserver embedded in Valgrind. The
default value is 100 milliseconds. A value of 0 disables forced invocation. The forced invocation is used when vgdb is
connected to a Valgrind gdbserver, and the Valgrind process has all its threads blocked in a system call.
If you specify a large value, you might need to increase the GDB "remotetimeout" value from its default value of
2 seconds. You should ensure that the timeout (in seconds) is bigger than the --max-invoke-ms value. For
example, for --max-invoke-ms=5000, the following GDB command is suitable:
(gdb) set remotetimeout 6
--cmd-time-out=<number>
Instructs a standalone vgdb to exit if the Valgrind gdbserver it is connected to does not process a command in the
specified number of seconds. The default value is to never time out.
--port=<portnr>
Instructs vgdb to use tcp/ip and listen for GDB on the specified port number rather than to use a pipe to communicate
with GDB. Using tcp/ip allows GDB to run on one computer and debug a Valgrind process running on
another target computer. Example:
# On the target computer, start your program under valgrind using
valgrind --vgdb-error=0 prog
# and then in another shell, run:
vgdb --port=1234
On the computer which hosts GDB, execute the command:
gdb prog
(gdb) target remote targetip:1234
where targetip is the ip address or hostname of the target computer.
-c
To give more than one command to a standalone vgdb, separate the commands by an option -c. Example:
vgdb v.set log_output -c leak_check any
-l
Instructs a standalone vgdb to report the list of the Valgrind gdbserver processes running and then exit.
-D
Instructs a standalone vgdb to show the state of the shared memory used by the Valgrind gdbserver. vgdb will exit
after having shown the Valgrind gdbserver shared memory state.
-d
Instructs vgdb to produce debugging output. Give multiple -d arguments to increase the verbosity. When giving -d to a
relay vgdb, it is best to redirect the standard error (stderr) of vgdb to a file, to avoid interference between GDB and vgdb
debugging output.
3.2.10. Valgrind monitor commands
This section describes the Valgrind monitor commands, available regardless of the Valgrind tool selected. For the
tool specific commands, refer to Memcheck Monitor Commands, Helgrind Monitor Commands, Callgrind Monitor
Commands and Massif Monitor Commands.
The monitor commands can be sent either from a shell command line, by using a standalone vgdb, or from GDB,
by using GDB’s "monitor" command (see Monitor command handling by the Valgrind gdbserver). They can also be
launched by the client program, using the VALGRIND_MONITOR_COMMAND client request.
help [debug] instructs Valgrind’s gdbserver to give the list of all monitor commands of the Valgrind core and
of the tool. The optional "debug" argument tells it to also give help for the monitor commands aimed at Valgrind
internals debugging.
v.info all_errors shows all errors found so far.
v.info last_error shows the last error found.
v.info location <addr> outputs information about the location <addr>. Possibly, the following are
described: global variables, local (stack) variables, allocated or freed blocks, ... The information produced depends
on the tool and on the options given to valgrind. Some tools (e.g. Memcheck and Helgrind) produce more detailed
information for client heap blocks. For example, these tools show the stacktrace where the heap block was allocated.
If a tool does not replace the malloc/free/... functions, then client heap blocks will not be described. Use the option
--read-var-info=yes to obtain more detailed information about global or local (stack) variables.
(gdb) monitor v.info location 0x8050b20
Location 0x8050b20 is 0 bytes inside global var "mx"
declared at tc19_shadowmem.c:19
(gdb) mo v.in loc 0x582f33c
Location 0x582f33c is 0 bytes inside local var "info"
declared at tc19_shadowmem.c:282, in frame #1 of thread 3
(gdb)
v.info n_errs_found [msg] shows the number of errors found so far, the number of errors shown so far and the
current value of the --vgdb-error argument. The optional msg (one or more words) is appended. Typically,
this can be used to insert markers in a process output file between several tests executed in sequence by a process
started only once. This makes it possible to associate the errors reported by Valgrind with the specific test that produced
them.
v.info open_fds shows the list of open file descriptors and details related to each file descriptor. This only
works if --track-fds=yes was given at Valgrind startup.
v.set {gdb_output | log_output | mixed_output} allows redirection of the Valgrind output (e.g.
the errors detected by the tool). The default setting is mixed_output.
With mixed_output, the Valgrind output goes to the Valgrind log (typically stderr) while the output of the
interactive GDB monitor commands (e.g. v.info last_error) is displayed by GDB.
With gdb_output, both the Valgrind output and the interactive GDB monitor commands output are displayed by
GDB.
With log_output, both the Valgrind output and the interactive GDB monitor commands output go to the Valgrind
log.
v.wait [ms (default 0)] instructs Valgrind gdbserver to sleep "ms" milliseconds and then continue.
When sent from a standalone vgdb, if this is the last command, the Valgrind process will continue the execution of
the guest process. The typical usage of this is to use vgdb to send a "no-op" command to a Valgrind gdbserver so as
to continue the execution of the guest process.
v.kill requests the gdbserver to kill the process. This can be used from a standalone vgdb to properly kill a
Valgrind process which is currently expecting a vgdb connection.
v.set vgdb-error <errornr> dynamically changes the value of the --vgdb-error argument. A typical
usage of this is to start with --vgdb-error=0 on the command line, then set a few breakpoints, set the vgdb-error
value to a huge value and continue execution.
xtmemory [<filename> default xtmemory.kcg.%p.%n] requests the tool (Memcheck, Massif, Helgrind)
to produce an xtree heap memory report. See Execution Trees for a detailed explanation about execution
trees.
The following Valgrind monitor commands are useful for investigating the behaviour of Valgrind or its gdbserver in
case of problems or bugs.
v.do expensive_sanity_check_general executes various sanity checks. In particular, the sanity of the
Valgrind heap is verified. This can be useful if you suspect that your program and/or Valgrind has a bug corrupting
Valgrind’s data structures. It can also be used when a Valgrind tool reports a client error to the connected GDB, in
order to verify the sanity of Valgrind before continuing the execution.
v.info gdbserver_status shows the gdbserver status. In case of problems (e.g. of communications),
this shows the values of some relevant Valgrind gdbserver internal variables. Note that the variables related to
breakpoints and watchpoints (e.g. the number of breakpoint addresses and the number of watchpoints) will be
zero, as GDB by default removes all watchpoints and breakpoints when execution stops, and re-inserts them when
resuming the execution of the debugged process. You can change this GDB behaviour by using the GDB command
set breakpoint always-inserted on.
v.info memory [aspacemgr] shows the statistics of Valgrind’s internal heap management. If option
--profile-heap=yes was given, detailed statistics will be output. With the optional argument aspacemgr,
the segment list maintained by Valgrind’s address space manager will be output. Note that this list of segments is
always output on the Valgrind log.
v.info exectxt shows information about the "executable contexts" (i.e. the stack traces) recorded by Valgrind.
For some programs, Valgrind can record a very high number of such stack traces, causing high memory usage.
This monitor command shows all the recorded stack traces, followed by some statistics. This can be used to analyse
the reason for a large number of stack traces. Typically, you will use this command if v.info memory has
shown significant memory usage by the "exectxt" arena.
v.info scheduler shows various information about threads. First, it outputs the host stack trace, i.e. the
Valgrind code being executed. Then, for each thread, it outputs the thread state. For non-terminated threads, the
state is followed by the guest (client) stack trace. Finally, for each active thread or for each terminated thread slot
not yet re-used, it shows the maximum usage of the Valgrind stack.
Showing the client stack traces makes it possible to compare the stack traces produced by the Valgrind unwinder with the stack
traces produced by GDB+Valgrind gdbserver. Note that GDB and the Valgrind scheduler status have their
own thread numbering schemes. To make the link between the GDB thread number and the corresponding Valgrind
scheduler thread number, use the GDB command info threads. The output of this command shows the GDB
thread number and the Valgrind ’tid’. The ’tid’ is the thread number output by v.info scheduler. When
using the callgrind tool, the callgrind monitor command status outputs internal callgrind information about the
stack/call graph it maintains.
v.info stats shows various Valgrind core and tool statistics. With this, Valgrind and tool statistics can be
examined while running, even without option --stats=yes.
v.info unwind <addr> [<len>] shows the CFI unwind debug info for the address range [addr, addr+len-
1]. The default value of <len> is 1, giving the unwind information for the instruction at <addr>.
v.set debuglog <intvalue> sets the Valgrind debug log level to <intvalue>. This makes it possible to change
the log level of Valgrind dynamically, e.g. when a problem is detected.
v.set hostvisibility [yes*|no] The value "yes" indicates to gdbserver that GDB can look at the
Valgrind ’host’ (internal) status/memory. "no" disables this access. When hostvisibility is activated, GDB can
e.g. look at Valgrind global variables. As an example, to examine a Valgrind global variable of the memcheck tool
on an x86, do the following setup:
(gdb) monitor v.set hostvisibility yes
(gdb) add-symbol-file /path/to/tool/executable/file/memcheck-x86-linux 0x58000000
add symbol table from file "/path/to/tool/executable/file/memcheck-x86-linux" at
.text_addr = 0x58000000
(y or n) y
Reading symbols from /path/to/tool/executable/file/memcheck-x86-linux...done.
(gdb)
After that, variables defined in memcheck-x86-linux can be accessed, e.g.
(gdb) p /x vgPlain_threads[1].os_state
$3 = {lwpid = 0x4688, threadgroup = 0x4688, parent = 0x0,
valgrind_stack_base = 0x62e78000, valgrind_stack_init_SP = 0x62f79fe0,
exitcode = 0x0, fatalsig = 0x0}
(gdb) p vex_control
$5 = {iropt_verbosity = 0, iropt_level = 2,
iropt_register_updates = VexRegUpdUnwindregsAtMemAccess,
iropt_unroll_thresh = 120, guest_max_insns = 60, guest_chase_thresh = 10,
guest_chase_cond = 0 ’\000’}
(gdb)
v.translate <address> [<traceflags>] shows the translation of the block containing address with
the given trace flags. The traceflags value bit patterns have similar meaning to Valgrind’s --trace-flags
option. It can be given in hexadecimal (e.g. 0x20), decimal (e.g. 32) or binary (e.g. 0b00100000).
The default value of the traceflags is 0b00100000, corresponding to "show after instrumentation". The output of
this command always goes to the Valgrind log.
The additional bit flag 0b100000000 (bit 8) has no equivalent in the --trace-flags option. It enables tracing of
the gdbserver specific instrumentation. Note that this bit 8 can only enable the addition of gdbserver instrumentation
in the trace. Setting it to 0 will not disable the tracing of the gdbserver instrumentation if it is active for some other
reason, for example because there is a breakpoint at this address or because gdbserver is in single stepping mode.
3.3. Function wrapping
Valgrind allows calls to some specified functions to be intercepted and rerouted to a different, user-supplied function.
The wrapper can do whatever it likes, typically examining the arguments, calling onwards to the original, and possibly
examining the result. Any number of functions may be wrapped.
Function wrapping is useful for instrumenting an API in some way. For example, Helgrind wraps functions in
the POSIX pthreads API so it can know about thread status changes, and the core is able to wrap functions in the
MPI (message-passing) API so it can know of memory status changes associated with message arrival/departure.
Such information is usually passed to Valgrind by using client requests in the wrapper functions, although the exact
mechanism may vary.
3.3.1. A Simple Example
Supposing we want to wrap some function
int foo ( int x, int y ) { return x + y; }
A wrapper is a function of identical type, but with a special name which identifies it as the wrapper for foo. Wrappers
need to include supporting macros from valgrind.h. Here is a simple wrapper which prints the arguments and
return value:
#include <stdio.h>
#include "valgrind.h"
int I_WRAP_SONAME_FNNAME_ZU(NONE,foo)( int x, int y )
{
   int    result;
   OrigFn fn;
   VALGRIND_GET_ORIG_FN(fn);
   printf("foo's wrapper: args %d %d\n", x, y);
   CALL_FN_W_WW(result, fn, x, y);
   printf("foo's wrapper: result %d\n", result);
   return result;
}
To become active, the wrapper merely needs to be present in a text section somewhere in the same process’ address
space as the function it wraps, and for its ELF symbol name to be visible to Valgrind. In practice, this means either
compiling to a .o and linking it in, or compiling to a .so and LD_PRELOADing it in. The latter is more convenient
in that it doesn’t require relinking.
All wrappers have approximately the above form. There are three crucial macros:
I_WRAP_SONAME_FNNAME_ZU: this generates the real name of the wrapper. This is an encoded name which
Valgrind notices when reading symbol table information. What it says is: I am the wrapper for any function named
foo which is found in an ELF shared object with an empty ("NONE") soname field. The specification mechanism is
powerful in that wildcards are allowed for both sonames and function names. The details are discussed below.
VALGRIND_GET_ORIG_FN: once in the wrapper, the first priority is to get hold of the address of the original (and
any other supporting information needed). This is stored in a value of opaque type OrigFn. The information is
acquired using VALGRIND_GET_ORIG_FN. It is crucial to make this macro call before calling any other wrapped
function in the same thread.
CALL_FN_W_WW: eventually we will want to call the function being wrapped. Calling it directly does not work, since
that just gets us back to the wrapper and leads to an infinite loop. Instead, the result lvalue, OrigFn and arguments
are handed to one of a family of macros of the form CALL_FN_*. These cause Valgrind to call the original and avoid
recursion back to the wrapper.
3.3.2. Wrapping Specifications
This scheme has the advantage of being self-contained. A library of wrappers can be compiled to object code in the
normal way, and does not rely on an external script telling Valgrind which wrappers pertain to which originals.
Each wrapper has a name which, in the most general case says: I am the wrapper for any function whose name matches
FNPATT and whose ELF "soname" matches SOPATT. Both FNPATT and SOPATT may contain wildcards (asterisks)
and other characters (spaces, dots, @, etc) which are not generally regarded as valid C identifier names.
This flexibility is needed to write robust wrappers for POSIX pthread functions, where typically we are not completely
sure of either the function name or the soname, or alternatively we want to wrap a whole set of functions at once.
For example, pthread_create in GNU libpthread is usually a versioned symbol - one whose name ends in, e.g.,
@GLIBC_2.3. Hence we are not sure what its real name is. We also want to cover any soname of the form
libpthread.so*. So the header of the wrapper will be
int I_WRAP_SONAME_FNNAME_ZZ(libpthreadZdsoZd0,pthreadZucreateZAZa)
( ... formals ... )
{ ... body ... }
In order to write unusual characters as valid C function names, a Z-encoding scheme is used. Names are written
literally, except that a capital Z acts as an escape character, with the following encoding:
Za encodes *
Zp +
Zc :
Zd .
Zu _
Zh -
Zs (space)
ZA @
ZZ Z
ZL ( # only in valgrind 3.3.0 and later
ZR ) # only in valgrind 3.3.0 and later
Hence libpthreadZdsoZd0 is an encoding of the soname libpthread.so.0 and pthreadZucreateZAZa
is an encoding of the function name pthread_create@*.
The macro I_WRAP_SONAME_FNNAME_ZZ constructs a wrapper name in which both the soname (first component)
and function name (second component) are Z-encoded. Encoding the function name can be tiresome and is often
unnecessary, so a second macro, I_WRAP_SONAME_FNNAME_ZU, can be used instead. The _ZU variant is also
useful for writing wrappers for C++ functions, in which the function name is usually already mangled using some
other convention in which Z plays an important role. Having to encode a second time quickly becomes confusing.
Since the function name field may contain wildcards, it can be anything, including just *. The same is true for
the soname. However, some ELF objects - specifically, main executables - do not have sonames. Any object
lacking a soname is treated as if its soname was NONE, which is why the original example above had a name
I_WRAP_SONAME_FNNAME_ZU(NONE,foo).
Note that the soname of an ELF object is not the same as its file name, although it is often similar. You can find the
soname of an object libfoo.so using the command readelf -a libfoo.so | grep soname.
3.3.3. Wrapping Semantics
The ability for a wrapper to replace an infinite family of functions is powerful but brings complications in situations
where ELF objects appear and disappear (are dlopen’d and dlclose’d) on the fly. Valgrind tries to maintain sensible
behaviour in such situations.
For example, suppose a process has dlopened (an ELF object with soname) object1.so, which contains
function1. It starts to use function1 immediately.
After a while it dlopens wrappers.so, which contains a wrapper for function1 in (soname) object1.so. All
subsequent calls to function1 are rerouted to the wrapper.
If wrappers.so is later dlclose’d, calls to function1 are naturally routed back to the original.
Alternatively, if object1.so is dlclose’d but wrappers.so remains, then the wrapper exported by
wrappers.so becomes inactive, since there is no way to get to it - there is no original to call any more.
However, Valgrind remembers that the wrapper is still present. If object1.so is eventually dlopen’d again, the
wrapper will become active again.
In short, valgrind inspects all code loading/unloading events to ensure that the set of currently active wrappers remains
consistent.
A second possible problem is that of conflicting wrappers. It is easily possible to load two or more wrappers, both of
which claim to be wrappers for some third function. In such cases Valgrind will complain about conflicting wrappers
when the second one appears, and will honour only the first one.
3.3.4. Debugging
Figuring out what’s going on given the dynamic nature of wrapping can be difficult. The --trace-redir=yes
option makes this possible by showing the complete state of the redirection subsystem after every mmap/munmap
event affecting code (text).
There are two central concepts:
A "redirection specification" is a binding of a (soname pattern, fnname pattern) pair to a code address. These
bindings are created by writing functions with names made with the I_WRAP_SONAME_FNNAME_{ZZ,_ZU}
macros.
An "active redirection" is a code-address to code-address binding currently in effect.
The state of the wrapping-and-redirection subsystem comprises a set of specifications and a set of active bindings.
The specifications are acquired/discarded by watching all mmap/munmap events on code (text) sections. The active
binding set is (conceptually) recomputed from the specifications, and all known symbol names, following any change
to the specification set.
--trace-redir=yes shows the contents of both sets following any such event.
-v prints a line of text each time an active specification is used for the first time.
Hence for maximum debugging effectiveness you will need to use both options.
One final comment. The function-wrapping facility is closely tied to Valgrind’s ability to replace (redirect) specified
functions, for example to redirect calls to malloc to its own implementation. Indeed, a replacement function can be
regarded as a wrapper function which does not call the original. However, to make the implementation more robust,
the two kinds of interception (wrapping vs replacement) are treated differently.
--trace-redir=yes shows specifications and bindings for both replacement and wrapper functions. To
differentiate the two, replacement bindings are printed using R-> whereas wraps are printed using W->.
3.3.5. Limitations - control flow
For the most part, the function wrapping implementation is robust. The only important caveat is: in a wrapper, get hold
of the OrigFn information using VALGRIND_GET_ORIG_FN before calling any other wrapped function. Once you
have the OrigFn, arbitrary calls between, recursion between, and longjumps out of wrappers should work correctly.
There is never any interaction between wrapped functions and merely replaced functions (e.g. malloc), so you can
call malloc etc safely from within wrappers.
The above comments are true for {x86,amd64,ppc32,arm,mips32,s390}-linux. On ppc64-linux function wrapping is
more fragile due to the (arguably poorly designed) ppc64-linux ABI. This mandates the use of a shadow stack which
tracks entries/exits of both wrapper and replacement functions. This gives two limitations: firstly, longjumping out
of wrappers will rapidly lead to disaster, since the shadow stack will not get correctly cleared. Secondly, since the
shadow stack has finite size, recursion between wrapper/replacement functions is only possible to a limited depth,
beyond which Valgrind has to abort the run. This depth is currently 16 calls.
For all platforms ({x86,amd64,ppc32,ppc64,arm,mips32,s390}-linux) all the above comments apply on a per-thread
basis. In other words, wrapping is thread-safe: each thread must individually observe the above restrictions, but there
is no need for any kind of inter-thread cooperation.
3.3.6. Limitations - original function signatures
As shown in the above example, to call the original you must use a macro of the form CALL_FN_*. For technical
reasons it is impossible to create a single macro to deal with all argument types and numbers, so a family of macros
covering the most common cases is supplied. In what follows, ’W’ denotes a machine-word-typed value (a pointer or
a C long), and ’v’ denotes C’s void type. The currently available macros are:
CALL_FN_v_v -- call an original of type void fn ( void )
CALL_FN_W_v -- call an original of type long fn ( void )
CALL_FN_v_W -- call an original of type void fn ( long )
CALL_FN_W_W -- call an original of type long fn ( long )
CALL_FN_v_WW -- call an original of type void fn ( long, long )
CALL_FN_W_WW -- call an original of type long fn ( long, long )
CALL_FN_v_WWW -- call an original of type void fn ( long, long, long )
CALL_FN_W_WWW -- call an original of type long fn ( long, long, long )
CALL_FN_W_WWWW -- call an original of type long fn ( long, long, long, long )
CALL_FN_W_5W -- call an original of type long fn ( long, long, long, long, long )
CALL_FN_W_6W -- call an original of type long fn ( long, long, long, long, long, long )
and so on, up to
CALL_FN_W_12W
The set of supported types can be expanded as needed. It is regrettable that this limitation exists. Function
wrapping has proven difficult to implement, with a certain apparently unavoidable level of ickiness. After several
implementation attempts, the present arrangement appears to be the least-worst tradeoff. At least it works reliably in
the presence of dynamic linking and dynamic code loading/unloading.
You should not attempt to wrap a function of one type signature with a wrapper of a different type signature.
Such trickery will surely lead to crashes or strange behaviour. This is not a limitation of the function wrapping
implementation, merely a reflection of the fact that it gives you sweeping powers to shoot yourself in the foot if you
are not careful. Imagine the instant havoc you could wreak by writing a wrapper which matched any function name
in any soname - in effect, one which claimed to be a wrapper for all functions in the process.
3.3.7. Examples
In the source tree, memcheck/tests/wrap[1-8].c provide a series of examples, ranging from very simple to
quite advanced.
mpi/libmpiwrap.c is an example of wrapping a big, complex API (the MPI-2 interface). This file defines almost
300 different wrappers.
4. Memcheck: a memory error detector
To use this tool, you may specify --tool=memcheck on the Valgrind command line. You don’t have to, though,
since Memcheck is the default tool.
4.1. Overview
Memcheck is a memory error detector. It can detect the following problems that are common in C and C++ programs.
• Accessing memory you shouldn’t, e.g. overrunning and underrunning heap blocks, overrunning the top of the stack,
and accessing memory after it has been freed.
• Using undefined values, i.e. values that have not been initialised, or that have been derived from other undefined
values.
• Incorrect freeing of heap memory, such as double-freeing heap blocks, or mismatched use of malloc/new/new[]
versus free/delete/delete[].
• Overlapping src and dst pointers in memcpy and related functions.
• Passing a fishy (presumably negative) value to the size parameter of a memory allocation function.
• Memory leaks.
Problems like these can be difficult to find by other means, often remaining undetected for long periods, then causing
occasional, difficult-to-diagnose crashes.
Memcheck also provides Execution Trees memory profiling using the command line option --xtree-memory and
the monitor command xtmemory.
4.2. Explanation of error messages from
Memcheck
Memcheck issues a range of error messages. This section presents a quick summary of what error messages mean.
The precise behaviour of the error-checking machinery is described in Details of Memcheck’s checking machinery.
4.2.1. Illegal read / Illegal write errors
For example:
Invalid read of size 4
at 0x40F6BBCC: (within /usr/lib/libpng.so.2.1.0.9)
by 0x40F6B804: (within /usr/lib/libpng.so.2.1.0.9)
by 0x40B07FF4: read_png_image(QImageIO *) (kernel/qpngio.cpp:326)
by 0x40AC751B: QImageIO::read() (kernel/qimage.cpp:3621)
Address 0xBFFFF0E0 is not stack’d, malloc’d or free’d
This happens when your program reads or writes memory at a place which Memcheck reckons it shouldn’t. In
this example, the program did a 4-byte read at address 0xBFFFF0E0, somewhere within the system-supplied library
libpng.so.2.1.0.9, which was called from somewhere else in the same library, called from line 326 of qpngio.cpp,
and so on.
Memcheck tries to establish what the illegal address might relate to, since that’s often useful. So, if it points
into a block of memory which has already been freed, you’ll be informed of this, and also where the block was
freed. Likewise, if it should turn out to be just off the end of a heap block, a common result of off-by-one
errors in array subscripting, you’ll be informed of this fact, and also where the block was allocated. If you use
the --read-var-info option Memcheck will run more slowly but may give a more detailed description of any
illegal address.
In this example, Memcheck can’t identify the address. Actually the address is on the stack, but, for some reason, this
is not a valid stack address -- it is below the stack pointer and that isn’t allowed. In this particular case it’s probably
caused by GCC generating invalid code, a known bug in some ancient versions of GCC.
Note that Memcheck only tells you that your program is about to access memory at an illegal address. It can’t stop the
access from happening. So, if your program makes an access which normally would result in a segmentation fault,
your program will still suffer the same fate -- but you will get a message from Memcheck immediately prior to this. In
this particular example, reading junk on the stack is non-fatal, and the program stays alive.
4.2.2. Use of uninitialised values
For example:
Conditional jump or move depends on uninitialised value(s)
at 0x402DFA94: _IO_vfprintf (_itoa.h:49)
by 0x402E8476: _IO_printf (printf.c:36)
by 0x8048472: main (tests/manuel1.c:8)
An uninitialised-value use error is reported when your program uses a value which hasn’t been initialised -- in other
words, is undefined. Here, the undefined value is used somewhere inside the printf machinery of the C library.
This error was reported when running the following small program:
#include <stdio.h>
int main()
{
  int x;
  printf ("x = %d\n", x);
}
It is important to understand that your program can copy around junk (uninitialised) data as much as it likes.
Memcheck observes this and keeps track of the data, but does not complain. A complaint is issued only when
your program attempts to make use of uninitialised data in a way that might affect your program’s externally-visible
behaviour. In this example, x is uninitialised. Memcheck observes the value being passed to _IO_printf and
thence to _IO_vfprintf, but makes no comment. However, _IO_vfprintf has to examine the value of x so it
can turn it into the corresponding ASCII string, and it is at this point that Memcheck complains.
Sources of uninitialised data tend to be:
• Local variables in procedures which have not been initialised, as in the example above.
• The contents of heap blocks (allocated with malloc, new, or a similar function) before you (or a constructor) write
something there.
To see information on the sources of uninitialised data in your program, use the --track-origins=yes option.
This makes Memcheck run more slowly, but can make it much easier to track down the root causes of uninitialised
value errors.
4.2.3. Use of uninitialised or unaddressable values in
system calls
Memcheck checks all parameters to system calls:
• It checks whether the direct parameters themselves are initialised.
• Also, if a system call needs to read from a buffer provided by your program, Memcheck checks that the entire buffer
is addressable and its contents are initialised.
• Also, if the system call needs to write to a user-supplied buffer, Memcheck checks that the buffer is addressable.
After the system call, Memcheck updates its tracked information to precisely reflect any changes in memory state
caused by the system call.
Here’s an example of two system calls with invalid parameters:
#include <stdlib.h>
#include <unistd.h>
int main( void )
{
  char *arr  = malloc(10);
  int  *arr2 = malloc(sizeof(int));
  write( 1 /* stdout */, arr, 10 );
  exit(arr2[0]);
}
You get these complaints ...
Syscall param write(buf) points to uninitialised byte(s)
at 0x25A48723: __write_nocancel (in /lib/tls/libc-2.3.3.so)
by 0x259AFAD3: __libc_start_main (in /lib/tls/libc-2.3.3.so)
by 0x8048348: (within /auto/homes/njn25/grind/head4/a.out)
Address 0x25AB8028 is 0 bytes inside a block of size 10 alloc’d
at 0x259852B0: malloc (vg_replace_malloc.c:130)
by 0x80483F1: main (a.c:5)
Syscall param exit(error_code) contains uninitialised byte(s)
at 0x25A21B44: __GI__exit (in /lib/tls/libc-2.3.3.so)
by 0x8048426: main (a.c:8)
... because the program has (a) written uninitialised junk from the heap block to the standard output, and (b) passed an
uninitialised value to exit. Note that the first error refers to the memory pointed to by buf (not buf itself), but the
second error refers directly to exit’s argument arr2[0].
4.2.4. Illegal frees
For example:
Invalid free()
at 0x4004FFDF: free (vg_clientmalloc.c:577)
by 0x80484C7: main (tests/doublefree.c:10)
Address 0x3807F7B4 is 0 bytes inside a block of size 177 free’d
at 0x4004FFDF: free (vg_clientmalloc.c:577)
by 0x80484C7: main (tests/doublefree.c:10)
Memcheck keeps track of the blocks allocated by your program with malloc/new, so it can know exactly whether
or not the argument to free/delete is legitimate. Here, this test program has freed the same block twice.
As with the illegal read/write errors, Memcheck attempts to make sense of the address freed. If, as here, the address
is one which has previously been freed, you will be told that -- making duplicate frees of the same block easy to spot.
You will also get this message if you try to free a pointer that doesn’t point to the start of a heap block.
4.2.5. When a heap block is freed with an inappropriate
deallocation function
In the following example, a block allocated with new[] has wrongly been deallocated with free:
Mismatched free() / delete / delete []
at 0x40043249: free (vg_clientfuncs.c:171)
by 0x4102BB4E: QGArray::~QGArray(void) (tools/qgarray.cpp:149)
by 0x4C261C41: PptDoc::~PptDoc(void) (include/qmemarray.h:60)
by 0x4C261F0E: PptXml::~PptXml(void) (pptxml.cc:44)
Address 0x4BB292A8 is 0 bytes inside a block of size 64 alloc’d
at 0x4004318C: operator new[](unsigned int) (vg_clientfuncs.c:152)
by 0x4C21BC15: KLaola::readSBStream(int) const (klaola.cc:314)
by 0x4C21C155: KLaola::stream(KLaola::OLENode const *) (klaola.cc:416)
by 0x4C21788F: OLEFilter::convert(QCString const &) (olefilter.cc:272)
In C++ it’s important to deallocate memory in a way compatible with how it was allocated. The deal is:
• If allocated with malloc, calloc, realloc, valloc or memalign, you must deallocate with free.
• If allocated with new, you must deallocate with delete.
• If allocated with new[], you must deallocate with delete[].
The worst thing is that on Linux apparently it doesn’t matter if you mix these up, but the same program may then
crash on a different platform, Solaris for example. So it’s best to fix it properly. According to the KDE folks "it’s
amazing how many C++ programmers don’t know this".
The reason behind the requirement is as follows. In some C++ implementations, delete[] must be used for objects
allocated by new[] because the compiler stores the size of the array and the pointer-to-member to the destructor of
the array’s content just before the pointer actually returned. delete doesn’t account for this and will get confused,
possibly corrupting the heap.
4.2.6. Overlapping source and destination blocks
The following C library functions copy some data from one memory block to another (or something similar): memcpy,
strcpy, strncpy, strcat, strncat. The blocks pointed to by their src and dst pointers aren’t allowed to
overlap. The POSIX standards have wording along the lines "If copying takes place between objects that overlap, the
behavior is undefined." Therefore, Memcheck checks for this.
For example:
==27492== Source and destination overlap in memcpy(0xbffff294, 0xbffff280, 21)
==27492== at 0x40026CDC: memcpy (mc_replace_strmem.c:71)
==27492== by 0x804865A: main (overlap.c:40)
You don’t want the two blocks to overlap because one of them could get partially overwritten by the copying.
You might think that Memcheck is being overly pedantic reporting this in the case where dst is less than src.
For example, the obvious way to implement memcpy is by copying from the first byte to the last. However, the
optimisation guides of some architectures recommend copying from the last byte down to the first. Also, some
implementations of memcpy zero dst before copying, because zeroing the destination’s cache line(s) can improve
performance.
The moral of the story is: if you want to write truly portable code, don’t make any assumptions about the language
implementation.
4.2.7. Fishy argument values
All memory allocation functions take an argument specifying the size of the memory block that should be allocated.
Clearly, the requested size should be a non-negative value and is typically not excessively large. For instance, it is
extremely unlikely that the size of an allocation request exceeds 2**63 bytes on a 64-bit machine. It is much more
likely that such a value is the result of an erroneous size calculation and is in effect a negative value (that just happens
to appear excessively large because the bit pattern is interpreted as an unsigned integer). Such a value is called a "fishy
value". The size argument of the following allocation functions is checked for being fishy: malloc, calloc,
realloc, memalign, new, new[], __builtin_new, __builtin_vec_new. For calloc, both arguments
are checked.
For example:
==32233== Argument ’size’ of function malloc has a fishy (possibly negative) value: -3
==32233== at 0x4C2CFA7: malloc (vg_replace_malloc.c:298)
==32233== by 0x400555: foo (fishy.c:15)
==32233== by 0x400583: main (fishy.c:23)
In earlier Valgrind versions those values were being referred to as "silly arguments" and no back-trace was included.
4.2.8. Memory leak detection
Memcheck keeps track of all heap blocks issued in response to calls to malloc/new et al. So when the program exits,
it knows which blocks have not been freed.
If --leak-check is set appropriately, for each remaining block, Memcheck determines if the block is reachable
from pointers within the root-set. The root-set consists of (a) general purpose registers of all threads, and (b)
initialised, aligned, pointer-sized data words in accessible client memory, including stacks.
There are two ways a block can be reached. The first is with a "start-pointer", i.e. a pointer to the start of the block.
The second is with an "interior-pointer", i.e. a pointer to the middle of the block. There are several ways we know of
that an interior-pointer can occur:
• The pointer might have originally been a start-pointer and have been moved along deliberately (or not deliberately)
by the program. In particular, this can happen if your program uses tagged pointers, i.e. if it uses the bottom one,
two or three bits of a pointer, which are normally always zero due to alignment, in order to store extra information.
• It might be a random junk value in memory, entirely unrelated, just a coincidence.
• It might be a pointer to the inner char array of a C++ std::string. For example, some compilers add 3 words at
the beginning of the std::string to store the length, the capacity and a reference count before the memory containing
the array of characters. They return a pointer just after these 3 words, pointing at the char array.
• Some code might allocate a block of memory, and use the first 8 bytes to store (block size - 8) as a 64-bit number.
sqlite3MemMalloc does this.
• It might be a pointer to an array of C++ objects (which possess destructors) allocated with new[]. In this case,
some compilers store a "magic cookie" containing the array length at the start of the allocated block, and return a
pointer to just past that magic cookie, i.e. an interior-pointer. See this page for more information.
• It might be a pointer to an inner part of a C++ object using multiple inheritance.
You can optionally activate heuristics to use during the leak search to detect the interior pointers corresponding to the
stdstring, length64, newarray and multipleinheritance cases. If the heuristic detects that an interior
pointer corresponds to such a case, the block will be considered as reachable by the interior pointer. In other words,
the interior pointer will be treated as if it were a start pointer.
With that in mind, consider the nine possible cases described by the following figure.
Pointer chain AAA Leak Case BBB Leak Case
------------- ------------- -------------
(1) RRR ------------> BBB DR
(2) RRR ---> AAA ---> BBB DR IR
(3) RRR BBB DL
(4) RRR AAA ---> BBB DL IL
(5) RRR ------?-----> BBB (y)DR, (n)DL
(6) RRR ---> AAA -?-> BBB DR (y)IR, (n)DL
(7) RRR -?-> AAA ---> BBB (y)DR, (n)DL (y)IR, (n)IL
(8) RRR -?-> AAA -?-> BBB (y)DR, (n)DL (y,y)IR, (n,y)IL, (_,n)DL
(9) RRR AAA -?-> BBB DL (y)IL, (n)DL
Pointer chain legend:
- RRR: a root set node or DR block
- AAA, BBB: heap blocks
- --->: a start-pointer
- -?->: an interior-pointer
Leak Case legend:
- DR: Directly reachable
- IR: Indirectly reachable
- DL: Directly lost
- IL: Indirectly lost
- (y)XY: it’s XY if the interior-pointer is a real pointer
- (n)XY: it’s XY if the interior-pointer is not a real pointer
- (_)XY: it’s XY in either case
Every possible case can be reduced to one of the above nine. Memcheck merges some of these cases in its output,
resulting in the following four leak kinds.
• "Still reachable". This covers cases 1 and 2 (for the BBB blocks) above. A start-pointer or chain of start-pointers
to the block is found. Since the block is still pointed at, the programmer could, at least in principle, have freed
it before program exit. "Still reachable" blocks are very common and arguably not a problem. So, by default,
Memcheck won’t report such blocks individually.
• "Definitely lost". This covers case 3 (for the BBB blocks) above. This means that no pointer to the block can be
found. The block is classified as "lost", because the programmer could not possibly have freed it at program exit,
since no pointer to it exists. This is likely a symptom of having lost the pointer at some earlier point in the program.
Such cases should be fixed by the programmer.
• "Indirectly lost". This covers cases 4 and 9 (for the BBB blocks) above. This means that the block is lost, not
because there are no pointers to it, but rather because all the blocks that point to it are themselves lost. For example,
if you have a binary tree and the root node is lost, all its children nodes will be indirectly lost. Because the problem
will disappear if the definitely lost block that caused the indirect leak is fixed, Memcheck won’t report such blocks
individually by default.
• "Possibly lost". This covers cases 5--8 (for the BBB blocks) above. This means that a chain of one or more
pointers to the block has been found, but at least one of the pointers is an interior-pointer. This could just be a
random value in memory that happens to point into a block, and so you shouldn’t consider this ok unless you know
you have interior-pointers.
(Note: This mapping of the nine possible cases onto four leak kinds is not necessarily the best way that leaks could be
reported; in particular, interior-pointers are treated inconsistently. It is possible the categorisation may be improved
in the future.)
Furthermore, if a suppression exists for a block, it will be reported as "suppressed" no matter which of the above
four kinds it belongs to.
The following is an example leak summary.
LEAK SUMMARY:
definitely lost: 48 bytes in 3 blocks.
indirectly lost: 32 bytes in 2 blocks.
possibly lost: 96 bytes in 6 blocks.
still reachable: 64 bytes in 4 blocks.
suppressed: 0 bytes in 0 blocks.
If heuristics have been used to consider some blocks as reachable, the leak summary details the heuristically reachable
subset of ’still reachable:’ per heuristic. In the example below, of the 95 bytes still reachable, 87 bytes (56+7+8+16)
have been considered heuristically reachable.
LEAK SUMMARY:
definitely lost: 4 bytes in 1 blocks
indirectly lost: 0 bytes in 0 blocks
possibly lost: 0 bytes in 0 blocks
still reachable: 95 bytes in 6 blocks
of which reachable via heuristic:
stdstring : 56 bytes in 2 blocks
length64 : 16 bytes in 1 blocks
newarray : 7 bytes in 1 blocks
multipleinheritance: 8 bytes in 1 blocks
suppressed: 0 bytes in 0 blocks
If --leak-check=full is specified, Memcheck will give details for each definitely lost or possibly lost block,
including where it was allocated. (Actually, it merges results for all blocks that have the same leak kind and
sufficiently similar stack traces into a single "loss record". The --leak-resolution option lets you control the meaning
of "sufficiently similar".) It cannot tell you when or how or why the pointer to a leaked block was lost; you have to
work that out for yourself. In general, you should attempt to ensure your programs do not have any definitely lost or
possibly lost blocks at exit.
For example:
8 bytes in 1 blocks are definitely lost in loss record 1 of 14
at 0x........: malloc (vg_replace_malloc.c:...)
by 0x........: mk (leak-tree.c:11)
by 0x........: main (leak-tree.c:39)
88 (8 direct, 80 indirect) bytes in 1 blocks are definitely lost in loss record 13 of 14
at 0x........: malloc (vg_replace_malloc.c:...)
by 0x........: mk (leak-tree.c:11)
by 0x........: main (leak-tree.c:25)
The first message describes a simple case of a single 8 byte block that has been definitely lost. The second case
mentions another 8 byte block that has been definitely lost; the difference is that a further 80 bytes in other blocks are
indirectly lost because of this lost block. The loss records are not presented in any notable order, so the loss record
numbers aren’t particularly meaningful. The loss record numbers can be used in the Valgrind gdbserver to list the
addresses of the leaked blocks and/or give more details about how a block is still reachable.
The option --show-leak-kinds=<set> controls the set of leak kinds to show when --leak-check=full
is specified.
The <set> of leak kinds is specified in one of the following ways:
• a comma separated list of one or more of definite indirect possible reachable.
• all to specify the complete set (all leak kinds).
• none for the empty set.
The default value for the leak kinds to show is --show-leak-kinds=definite,possible.
To also show the reachable and indirectly lost blocks in addition to the definitely and possibly lost blocks,
you can use --show-leak-kinds=all. To only show the reachable and indirectly lost blocks, use
--show-leak-kinds=indirect,reachable. The reachable and indirectly lost blocks will then be
presented as shown in the following two examples.
64 bytes in 4 blocks are still reachable in loss record 2 of 4
at 0x........: malloc (vg_replace_malloc.c:177)
by 0x........: mk (leak-cases.c:52)
by 0x........: main (leak-cases.c:74)
32 bytes in 2 blocks are indirectly lost in loss record 1 of 4
at 0x........: malloc (vg_replace_malloc.c:177)
by 0x........: mk (leak-cases.c:52)
by 0x........: main (leak-cases.c:80)
Because there are different kinds of leaks with different severities, an interesting question is: which leaks should be
counted as true "errors" and which should not?
The answer to this question affects the numbers printed in the ERROR SUMMARY line, and also the effect of the
--error-exitcode option. First, a leak is only counted as a true "error" if --leak-check=full is specified.
Then, the option --errors-for-leak-kinds=<set> controls the set of leak kinds to consider as errors. The
default value is --errors-for-leak-kinds=definite,possible.
4.3. Memcheck Command-Line Options
--leak-check=<no|summary|yes|full> [default: summary]
When enabled, search for memory leaks when the client program finishes. If set to summary, it says how many leaks
occurred. If set to full or yes, each individual leak will be shown in detail and/or counted as an error, as specified
by the options --show-leak-kinds and --errors-for-leak-kinds.
--leak-resolution=<low|med|high> [default: high]
When doing leak checking, determines how willing Memcheck is to consider different backtraces to be the same for
the purposes of merging multiple leaks into a single leak report. When set to low, only the first two entries need
match. When med, four entries have to match. When high, all entries need to match.
For hardcore leak debugging, you probably want to use --leak-resolution=high together with
--num-callers=40 or some such large number.
Note that the --leak-resolution setting does not affect Memcheck’s ability to find leaks. It only changes how
the results are presented.
--show-leak-kinds=<set> [default: definite,possible]
Specifies the leak kinds to show in a full leak search, in one of the following ways:
• a comma separated list of one or more of definite indirect possible reachable.
• all to specify the complete set (all leak kinds). It is equivalent to --show-leak-kinds=definite,indirect,possible,reachable.
• none for the empty set.
--errors-for-leak-kinds=<set> [default: definite,possible]
Specifies the leak kinds to count as errors in a full leak search. The <set> is specified similarly to
--show-leak-kinds.
--leak-check-heuristics=<set> [default: all]
Specifies the set of leak check heuristics to be used during leak searches. The heuristics control which interior pointers
to a block cause it to be considered as reachable. The heuristic set is specified in one of the following ways:
• a comma separated list of one or more of stdstring length64 newarray multipleinheritance.
• all to activate the complete set of heuristics. It is equivalent to --leak-check-heuristics=stdstring,length64,newarray,multipleinheritance.
• none for the empty set.
--show-reachable=<yes|no>, --show-possibly-lost=<yes|no>
These options provide an alternative way to specify the leak kinds to show:
• --show-reachable=no --show-possibly-lost=yes is equivalent to --show-leak-kinds=definite,possible.
• --show-reachable=no --show-possibly-lost=no is equivalent to --show-leak-kinds=definite.
• --show-reachable=yes is equivalent to --show-leak-kinds=all.
--xtree-leak=<no|yes> [default: no]
If set to yes, the results for the leak search done at exit will be output in a ’Callgrind Format’ execution tree file. Note
that this automatically sets the option --leak-check=full. The produced file will contain the following events:
RB : Reachable Bytes
PB : Possibly lost Bytes
IB : Indirectly lost Bytes
DB : Definitely lost Bytes (direct plus indirect)
DIB : Definitely Indirectly lost Bytes (subset of DB)
RBk : reachable Blocks
PBk : Possibly lost Blocks
IBk : Indirectly lost Blocks
DBk : Definitely lost Blocks
The increase or decrease for all events above will also be output in the file to provide the delta (increase or decrease)
between 2 successive leak searches. For example, iRB is the increase of the RB event, dPBk is the decrease of PBk
event. The values for the increase and decrease events will be zero for the first leak search done.
See Execution Trees for a detailed explanation about execution trees.
--xtree-leak-file=<filename> [default: xtleak.kcg.%p]
Specifies that Valgrind should produce the xtree leak report in the specified file. Any %p, %q or %n sequences
appearing in the filename are expanded in exactly the same way as they are for --log-file. See the description of
--log-file for details.
See Execution Trees for a detailed explanation about execution trees formats.
--undef-value-errors=<yes|no> [default: yes]
Controls whether Memcheck reports uses of undefined value errors. Set this to no if you don’t want to see undefined
value errors. It also has the side effect of speeding up Memcheck somewhat. AddrCheck (removed in Valgrind 3.1.0)
functioned like Memcheck with --undef-value-errors=no.
--track-origins=<yes|no> [default: no]
Controls whether Memcheck tracks the origin of uninitialised values. By default, it does not, which means that
although it can tell you that an uninitialised value is being used in a dangerous way, it cannot tell you where the
uninitialised value came from. This often makes it difficult to track down the root problem.
When set to yes, Memcheck keeps track of the origins of all uninitialised values. Then, when an uninitialised value
error is reported, Memcheck will try to show the origin of the value. An origin can be one of the following four
places: a heap block, a stack allocation, a client request, or miscellaneous other sources (eg, a call to brk).
For uninitialised values originating from a heap block, Memcheck shows where the block was allocated. For
uninitialised values originating from a stack allocation, Memcheck can tell you which function allocated the value, but
no more than that -- typically it shows you the source location of the opening brace of the function. So you should
carefully check that all of the function’s local variables are initialised properly.
Performance overhead: origin tracking is expensive. It halves Memcheck’s speed and increases memory use by a
minimum of 100MB, and possibly more. Nevertheless it can drastically reduce the effort required to identify the root
cause of uninitialised value errors, and so is often a programmer productivity win, despite running more slowly.
Accuracy: Memcheck tracks origins quite accurately. To avoid very large space and time overheads, some
approximations are made. It is possible, although unlikely, that Memcheck will report an incorrect origin, or not
be able to identify any origin.
Note that the combination --track-origins=yes and --undef-value-errors=no is nonsensical.
Memcheck checks for and rejects this combination at startup.
--partial-loads-ok=<yes|no> [default: yes]
Controls how Memcheck handles 32-, 64-, 128- and 256-bit naturally aligned loads from addresses for which some
bytes are addressable and others are not. When yes, such loads do not produce an address error. Instead, loaded
bytes originating from illegal addresses are marked as uninitialised, and those corresponding to legal addresses are
handled in the normal way.
When no, loads from partially invalid addresses are treated the same as loads from completely invalid addresses: an
illegal-address error is issued, and the resulting bytes are marked as initialised.
Note that code that behaves in this way is in violation of the ISO C/C++ standards, and should be considered broken.
If at all possible, such code should be fixed.
--expensive-definedness-checks=<no|auto|yes> [default: auto]
Controls whether Memcheck should employ more precise but also more expensive (time consuming) instrumentation
when checking the definedness of certain values. In particular, this affects the instrumentation of integer adds,
subtracts and equality comparisons.
Selecting --expensive-definedness-checks=yes causes Memcheck to use the most accurate analysis
possible. This minimises false error rates but can cause up to 30% performance degradation.
Selecting --expensive-definedness-checks=no causes Memcheck to use the cheapest instrumentation
possible. This maximises performance but will normally give an unusably high false error rate.
The default setting, --expensive-definedness-checks=auto, is strongly recommended. This causes
Memcheck to use the minimum of expensive instrumentation needed to achieve the same false error rate as
--expensive-definedness-checks=yes. It also enables an instrumentation-time analysis pass which aims
to further reduce the costs of accurate instrumentation. Overall, the performance loss is generally around 5% relative
to --expensive-definedness-checks=no, although this is strongly workload dependent. Note that the
exact instrumentation settings in this mode are architecture dependent.
--keep-stacktraces=alloc|free|alloc-and-free|alloc-then-free|none [default: alloc-and-free]
Controls which stack trace(s) to keep for malloc’d and/or free’d blocks.
With alloc-then-free, a stack trace is recorded at allocation time, and is associated with the block. When the
block is freed, a second stack trace is recorded, and this replaces the allocation stack trace. As a result, any "use after
free" errors relating to this block can only show a stack trace for where the block was freed.
With alloc-and-free, both allocation and the deallocation stack traces for the block are stored. Hence a "use
after free" error will show both, which may make the error easier to diagnose. Compared to alloc-then-free,
this setting slightly increases Valgrind’s memory use as the block contains two references instead of one.
With alloc, only the allocation stack trace is recorded (and reported). With free, only the deallocation stack trace
is recorded (and reported). These values somewhat decrease Valgrind’s memory and cpu usage. They can be useful
depending on the error types you are searching for and the level of detail you need to analyse them. For example, if
you are only interested in memory leak errors, it is sufficient to record the allocation stack traces.
With none, no stack traces are recorded for malloc and free operations. If your program allocates a lot of blocks
and/or allocates/frees from many different stack traces, this can significantly decrease cpu and/or memory required.
Of course, few details will be reported for errors related to heap blocks.
Note that once a stack trace is recorded, Valgrind keeps the stack trace in memory even if it is not referenced
by any block. Some programs (for example, recursive algorithms) can generate a huge number of stack traces.
If Valgrind uses too much memory in such circumstances, you can reduce the memory required with the options
--keep-stacktraces and/or by using a smaller value for the option --num-callers.
If you want to use --xtree-memory=full memory profiling (see Execution Trees), then you cannot specify
--keep-stacktraces=free or --keep-stacktraces=none.
--freelist-vol=<number> [default: 20000000]
When the client program releases memory using free (in C) or delete (C++), that memory is not immediately made
available for re-allocation. Instead, it is marked inaccessible and placed in a queue of freed blocks. The purpose
is to defer as long as possible the point at which freed-up memory comes back into circulation. This increases the
chance that Memcheck will be able to detect invalid accesses to blocks for some significant period of time after they
have been freed.
This option specifies the maximum total size, in bytes, of the blocks in the queue. The default value is twenty million
bytes. Increasing this increases the total amount of memory used by Memcheck but may detect invalid uses of freed
blocks which would otherwise go undetected.
--freelist-big-blocks=<number> [default: 1000000]
When making blocks from the queue of freed blocks available for re-allocation, Memcheck will in priority re-circulate
the blocks with a size greater or equal to --freelist-big-blocks. This ensures that freeing big blocks (in
particular freeing blocks bigger than --freelist-vol) does not immediately lead to a re-circulation of all (or a lot
of) the small blocks in the free list. In other words, this option increases the likelihood to discover dangling pointers
for the "small" blocks, even when big blocks are freed.
Setting a value of 0 means that all the blocks are re-circulated in a FIFO order.
--workaround-gcc296-bugs=<yes|no> [default: no]
When enabled, assume that reads and writes some small distance below the stack pointer are due to bugs in GCC 2.96,
and does not report them. The "small distance" is 256 bytes by default. Note that GCC 2.96 is the default compiler
on some ancient Linux distributions (RedHat 7.X) and so you may need to use this option. Do not use it if you do not
have to, as it can cause real errors to be overlooked. A better alternative is to use a more recent GCC in which this
bug is fixed.
You may also need to use this option when working with GCC 3.X or 4.X on 32-bit PowerPC Linux. This is because
GCC generates code which occasionally accesses below the stack pointer, particularly for floating-point to/from integer
conversions. This is in violation of the 32-bit PowerPC ELF specification, which makes no provision for locations
below the stack pointer to be accessible.
This option is deprecated as of version 3.12 and may be removed from future versions. You should instead use
--ignore-range-below-sp to specify the exact range of offsets below the stack pointer that should be ignored.
A suitable equivalent is --ignore-range-below-sp=1024-1.
--ignore-range-below-sp=<number>-<number>
This is a more general replacement for the deprecated --workaround-gcc296-bugs option. When specified,
it causes Memcheck not to report errors for accesses at the specified offsets below the stack pointer. The two offsets
must be positive decimal numbers and -- somewhat counterintuitively -- the first one must be larger, in order to imply
a non-wraparound address range to ignore. For example, to ignore 4 byte accesses at 8192 bytes below the stack
pointer, use --ignore-range-below-sp=8192-8189. Only one range may be specified.
--show-mismatched-frees=<yes|no> [default: yes]
When enabled, Memcheck checks that heap blocks are deallocated using a function that matches the allocating
function. That is, it expects free to be used to deallocate blocks allocated by malloc,delete for blocks
allocated by new, and delete[] for blocks allocated by new[]. If a mismatch is detected, an error is reported.
This is in general important because in some environments, freeing with a non-matching function can cause crashes.
There is however a scenario where such mismatches cannot be avoided. That is when the user provides
implementations of new/new[] that call malloc and of delete/delete[] that call free, and these functions are
asymmetrically inlined. For example, imagine that delete[] is inlined but new[] is not. The result is that
Memcheck "sees" all delete[] calls as direct calls to free, even when the program source contains no mismatched
calls.
This causes a lot of confusing and irrelevant error reports. --show-mismatched-frees=no disables these
checks. It is not generally advisable to disable them, though, because you may miss real errors as a result.
--ignore-ranges=0xPP-0xQQ[,0xRR-0xSS]
Any ranges listed in this option (and multiple ranges can be specified, separated by commas) will be ignored by
Memcheck’s addressability checking.
--malloc-fill=<hexnumber>
Fills blocks allocated by malloc, new, etc, but not by calloc, with the specified byte. This can be useful when
trying to shake out obscure memory corruption problems. The allocated area is still regarded by Memcheck as
undefined -- this option only affects its contents. Note that --malloc-fill does not affect a block of memory when it
is used as argument to client requests VALGRIND_MEMPOOL_ALLOC or VALGRIND_MALLOCLIKE_BLOCK.
--free-fill=<hexnumber>
Fills blocks freed by free, delete, etc, with the specified byte value. This can be useful when trying to shake
out obscure memory corruption problems. The freed area is still regarded by Memcheck as not valid for access --
this option only affects its contents. Note that --free-fill does not affect a block of memory when it is used as
argument to client requests VALGRIND_MEMPOOL_FREE or VALGRIND_FREELIKE_BLOCK.
4.4. Writing suppression files
The basic suppression format is described in Suppressing errors.
The suppression-type (second) line should have the form:
Memcheck:suppression_type
The Memcheck suppression types are as follows:
• Value1, Value2, Value4, Value8, Value16, meaning an uninitialised-value error when using a value of 1,
2, 4, 8 or 16 bytes.
• Cond (or its old name, Value0), meaning use of an uninitialised CPU condition code.
• Addr1, Addr2, Addr4, Addr8, Addr16, meaning an invalid address during a memory access of 1, 2, 4, 8 or 16
bytes respectively.
• Jump, meaning a jump to an unaddressable location error.
• Param, meaning an invalid system call parameter error.
• Free, meaning an invalid or mismatching free.
• Overlap, meaning a src/dst overlap in memcpy or a similar function.
• Leak, meaning a memory leak.
Param errors have a mandatory extra information line at this point, which is the name of the offending system call
parameter.
Leak errors have an optional extra information line, with the following format:
match-leak-kinds:<set>
where <set> specifies which leak kinds are matched by this suppression entry. <set> is specified in the same way
as with the option --show-leak-kinds, that is, one of the following:
• a comma separated list of one or more of definite indirect possible reachable.
• all to specify the complete set (all leak kinds).
• none for the empty set.
If this optional extra line is not present, the suppression entry will match all leak kinds.
Be aware that leak suppressions that are created using --gen-suppressions will contain this optional extra line,
and therefore may match fewer leaks than you expect. You may want to remove the line before using the generated
suppressions.
The other Memcheck error kinds do not have extra lines.
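Putting the pieces above together, a leak suppression entry might look like the following (the suppression name and the fun: frames below main are hypothetical placeholders, not taken from the manual's examples):

```
{
   my_app_widget_leak
   Memcheck:Leak
   match-leak-kinds: definite,indirect
   fun:malloc
   fun:make_widget
   fun:main
}
```

The first fun: line names the allocator, and the remaining lines give the rest of the calling context to match; removing the match-leak-kinds line would make the entry match all leak kinds.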
If you give the -v option, Valgrind will print the list of used suppressions at the end of execution. For a leak
suppression, this output gives the number of different loss records that match the suppression, and the number of
bytes and blocks suppressed by the suppression. If the run contains multiple leak checks, the number of bytes and
blocks are reset to zero before each new leak check. Note that the number of different loss records is not reset to zero.
In the example below, in the last leak search, 7 blocks and 96 bytes have been suppressed by a suppression with the
name some_leak_suppression:
--21041-- used_suppression: 10 some_other_leak_suppression s.supp:14 suppressed: 12,400 bytes in 1 blocks
--21041-- used_suppression: 39 some_leak_suppression s.supp:2 suppressed: 96 bytes in 7 blocks
For ValueN and AddrN errors, the first line of the calling context is either the name of the function in which the error
occurred, or, failing that, the full path of the .so file or executable containing the error location. For Free errors, the
first line is the name of the function doing the freeing (e.g. free, __builtin_vec_delete, etc.). For Overlap
errors, the first line is the name of the function with the overlapping arguments (e.g. memcpy, strcpy, etc.).
The last part of any suppression specifies the rest of the calling context that needs to be matched.
4.5. Details of Memcheck’s checking machinery
Read this section if you want to know, in detail, exactly what and how Memcheck is checking.
4.5.1. Valid-value (V) bits
It is simplest to think of Memcheck implementing a synthetic CPU which is identical to a real CPU, except for one
crucial detail. Every bit (literally) of data processed, stored and handled by the real CPU has, in the synthetic CPU, an
associated "valid-value" bit, which says whether or not the accompanying bit has a legitimate value. In the discussions
which follow, this bit is referred to as the V (valid-value) bit.
Each byte in the system therefore has 8 V bits which follow it wherever it goes. For example, when the CPU loads a
word-size item (4 bytes) from memory, it also loads the corresponding 32 V bits from a bitmap which stores the V bits
for the process’ entire address space. If the CPU should later write the whole or some part of that value to memory at
a different address, the relevant V bits will be stored back in the V-bit bitmap.
In short, each bit in the system has (conceptually) an associated V bit, which follows it around everywhere, even
inside the CPU. Yes, all the CPU’s registers (integer, floating point, vector and condition registers) have their own V
bit vectors. For this to work, Memcheck uses a great deal of compression to represent the V bits compactly.
Copying values around does not cause Memcheck to check for, or report on, errors. However, when a value is
used in a way which might conceivably affect your program’s externally-visible behaviour, the associated V bits are
immediately checked. If any of these indicate that the value is undefined (even partially), an error is reported.
Here’s an (admittedly nonsensical) example:
int i, j;
int a[10], b[10];
for (i = 0; i < 10; i++) {
    j = a[i];
    b[i] = j;
}
Memcheck emits no complaints about this, since it merely copies uninitialised values from a[] into b[], and doesn’t
use them in a way which could affect the behaviour of the program. However, if the loop is changed to:
for (i = 0; i < 10; i++) {
    j += a[i];
}
if (j == 77)
    printf("hello there\n");
then Memcheck will complain, at the if, that the condition depends on uninitialised values. Note that it doesn’t
complain at the j += a[i];, since at that point the undefinedness is not "observable". It’s only when a decision
has to be made as to whether or not to do the printf -- an observable action of your program -- that Memcheck
complains.
Most low level operations, such as adds, cause Memcheck to use the V bits for the operands to calculate the V bits for
the result. Even if the result is partially or wholly undefined, it does not complain.
Checks on definedness only occur in three places: when a value is used to generate a memory address, when a
control-flow decision needs to be made, and when a system call is detected, at which point Memcheck checks the
definedness of parameters as required.
If a check should detect undefinedness, an error message is issued. The resulting value is subsequently regarded as
well-defined. To do otherwise would give long chains of error messages. In other words, once Memcheck reports an
undefined value error, it tries to avoid reporting further errors derived from that same undefined value.
This sounds overcomplicated. Why not just check all reads from memory, and complain if an undefined value is
loaded into a CPU register? Well, that doesn’t work well, because perfectly legitimate C programs routinely copy
uninitialised values around in memory, and we don’t want endless complaints about that. Here’s the canonical
example. Consider a struct like this:
struct S { int x; char c; };
struct S s1, s2;
s1.x = 42;
s1.c = 'z';
s2 = s1;
The question to ask is: how large is struct S, in bytes? An int is 4 bytes and a char one byte, so perhaps a
struct S occupies 5 bytes? Wrong. All non-toy compilers we know of will round the size of struct S up to
a whole number of words, in this case 8 bytes. Not doing this forces compilers to generate truly appalling code for
accessing arrays of struct Ss on some architectures.
So s1 occupies 8 bytes, yet only 5 of them will be initialised. For the assignment s2 = s1, GCC generates code
to copy all 8 bytes wholesale into s2 without regard for their meaning. If Memcheck simply checked values as they
came out of memory, it would yelp every time a structure assignment like this happened. So the more complicated
behaviour described above is necessary. This allows GCC to copy s1 into s2 any way it likes, and a warning will
only be emitted if the uninitialised values are later used.
As explained above, Memcheck maintains 8 V bits for each byte in your process, including for bytes that are in
shared memory. However, the same piece of shared memory can be mapped multiple times, by several processes
or even by the same process (for example, if the process wants a read-only and a read-write mapping of the same
page). For such multiple mappings, Memcheck tracks the V bits for each mapping independently. This can
lead to false positive errors, as the shared memory can be initialised via a first mapping, and accessed via another
mapping. The access via this other mapping will have its own V bits, which have not been changed when the memory
was initialised via the first mapping. The workaround for these false positives is to use Memcheck’s client requests
VALGRIND_MAKE_MEM_DEFINED and VALGRIND_MAKE_MEM_UNDEFINED to inform Memcheck about what
your program does (or what another process does) to these shared memory mappings.
4.5.2. Valid-address (A) bits
Notice that the previous subsection describes how the validity of values is established and maintained without having
to say whether the program does or does not have the right to access any particular memory location. We now consider
the latter question.
As described above, every bit in memory or in the CPU has an associated valid-value (V) bit. In addition, all bytes
in memory, but not in the CPU, have an associated valid-address (A) bit. This indicates whether or not the program
can legitimately read or write that location. It does not give any indication of the validity of the data at that location
-- that’s the job of the V bits -- only whether or not the location may be accessed.
Every time your program reads or writes memory, Memcheck checks the A bits associated with the address. If any of
them indicate an invalid address, an error is emitted. Note that the reads and writes themselves do not change the A
bits, only consult them.
So how do the A bits get set/cleared? Like this:
• When the program starts, all the global data areas are marked as accessible.
• When the program does malloc/new, the A bits for exactly the area allocated, and not a byte more, are marked as
accessible. Upon freeing the area, the A bits are changed to indicate inaccessibility.
• When the stack pointer register (SP) moves up or down, A bits are set. The rule is that the area from SP up to
the base of the stack is marked as accessible, and below SP is inaccessible. (If that sounds illogical, bear in mind
that the stack grows down, not up, on almost all Unix systems, including GNU/Linux.) Tracking SP like this has
the useful side-effect that the section of stack used by a function for local variables etc. is automatically marked
accessible on function entry and inaccessible on exit.
• When doing system calls, A bits are changed appropriately. For example, mmap magically makes files appear in the
process’ address space, so the A bits must be updated if mmap succeeds.
• Optionally, your program can tell Memcheck about such changes explicitly, using the client request mechanism
described above.
4.5.3. Putting it all together
Memcheck’s checking machinery can be summarised as follows:
• Each byte in memory has 8 associated V (valid-value) bits, saying whether or not the byte has a defined value, and
a single A (valid-address) bit, saying whether or not the program currently has the right to read/write that address.
As mentioned above, heavy use of compression means the overhead is typically around 25%.
• When memory is read or written, the relevant A bits are consulted. If they indicate an invalid address, Memcheck
emits an Invalid read or Invalid write error.
• When memory is read into the CPU’s registers, the relevant V bits are fetched from memory and stored in the
simulated CPU. They are not consulted.
• When a register is written out to memory, the V bits for that register are written back to memory too.
• When values in CPU registers are used to generate a memory address, or to determine the outcome of a conditional
branch, the V bits for those values are checked, and an error emitted if any of them are undefined.
• When values in CPU registers are used for any other purpose, Memcheck computes the V bits for the result, but
does not check them.
• Once the V bits for a value in the CPU have been checked, they are then set to indicate validity. This avoids long
chains of errors.
• When values are loaded from memory, Memcheck checks the A bits for that location and issues an illegal-address
warning if needed. In that case, the V bits loaded are forced to indicate Valid, despite the location being invalid.
This apparently strange choice reduces the amount of confusing information presented to the user. It avoids the
unpleasant phenomenon in which memory is read from a place which is both unaddressable and contains invalid
values, and, as a result, you get not only an invalid-address (read/write) error, but also a potentially large set of
uninitialised-value errors, one for every time the value is used. There is a hazy boundary case to do with multi-byte
loads from addresses which are partially valid and partially invalid. See the description of the option
--partial-loads-ok for details.
Memcheck intercepts calls to malloc, calloc, realloc, valloc, memalign, free, new, new[], delete
and delete[]. The behaviour you get is:
• malloc/new/new[]: the returned memory is marked as addressable but not having valid values. This means you
have to write to it before you can read it.
• calloc: returned memory is marked both addressable and valid, since calloc clears the area to zero.
• realloc: if the new size is larger than the old, the new section is addressable but invalid, as with malloc. If the
new size is smaller, the dropped-off section is marked as unaddressable. You may only pass to realloc a pointer
previously issued to you by malloc/calloc/realloc.
• free/delete/delete[]: you may only pass to these functions a pointer previously issued to you by the
corresponding allocation function. Otherwise, Memcheck complains. If the pointer is indeed valid, Memcheck
marks the entire area it points at as unaddressable, and places the block in the freed-blocks queue. The aim is
to defer as long as possible reallocation of this block. Until that happens, all attempts to access it will elicit an
invalid-address error, as you would hope.
4.6. Memcheck Monitor Commands
The Memcheck tool provides monitor commands handled by Valgrind’s built-in gdbserver (see Monitor command
handling by the Valgrind gdbserver).
xb <addr> [<len>] shows the definedness (V) bits and values for <len> (default 1) bytes starting at <addr>.
For each group of 8 bytes, two lines are output.
The first line shows the validity bits for 8 bytes. The definedness of each byte in the range is given using two
hexadecimal digits. These hexadecimal digits encode the validity of each bit of the corresponding byte, using 0
if the bit is defined and 1 if the bit is undefined. If a byte is not addressable, its validity bits are replaced by __ (a
double underscore).
The second line shows the values of the bytes below the corresponding validity bits. The format used to show the
bytes’ data is similar to the GDB command 'x /<len>xb <addr>'. The value of a non-addressable byte is shown as
?? (two question marks).
In the following example, string10 is an array of 10 characters, in which the even-numbered bytes are undefined
and the byte corresponding to string10[5] is not addressable.
(gdb) p &string10
$4 = (char (*)[10]) 0x804a2f0
(gdb) mo xb 0x804a2f0 10
ff 00 ff 00 ff __ ff 00
0x804A2F0: 0x3f 0x6e 0x3f 0x65 0x3f 0x?? 0x3f 0x65
ff 00
0x804A2F8: 0x3f 0x00
Address 0x804A2F0 len 10 has 1 bytes unaddressable
(gdb)
The command xb cannot be used with registers. To get the validity bits of a register, you must start Valgrind with
the option --vgdb-shadow-registers=yes. The validity bits of a register can then be obtained by printing
the corresponding 'shadow 1' register. In the below x86 example, the register eax has all its bits undefined, while
the register ebx is fully defined.
(gdb) p /x $eaxs1
$9 = 0xffffffff
(gdb) p /x $ebxs1
$10 = 0x0
(gdb)
get_vbits <addr> [<len>] shows the definedness (V) bits for <len> (default 1) bytes starting at <addr>
using the same convention as the xb command. get_vbits only shows the V bits (grouped by 4 bytes). It does
not show the values. If you want to associate V bits with the corresponding byte values, the xb command will be
easier to use, in particular on little endian computers when associating undefined parts of an integer with their V
bits values.
The following example shows the result of get_vbits on the string10 array used in the xb command explanation.
(gdb) monitor get_vbits 0x804a2f0 10
ff00ff00 ff__ff00 ff00
Address 0x804A2F0 len 10 has 1 bytes unaddressable
(gdb)
make_memory [noaccess|undefined|defined|Definedifaddressable] <addr> [<len>]
marks the range of <len> (default 1) bytes at <addr> as having the given status. Parameter noaccess marks
the range as non-accessible, so Memcheck will report an error on any access to it. undefined or defined
mark the area as accessible, but Memcheck regards the bytes in it respectively as having undefined or defined
values. Definedifaddressable marks as defined those bytes in the range which are already addressable, but
makes no change to the status of bytes in the range which are not addressable. Note that the first letter of
Definedifaddressable is an uppercase D, to avoid confusion with defined.
In the following example, the first byte of the string10 is marked as defined:
(gdb) monitor make_memory defined 0x8049e28 1
(gdb) monitor get_vbits 0x8049e28 10
0000ff00 ff00ff00 ff00
(gdb)
check_memory [addressable|defined] <addr> [<len>] checks that the range of <len> (default 1)
bytes at <addr> has the specified accessibility. It then outputs a description of <addr>. In the following example, a
detailed description is available because the option --read-var-info=yes was given at Valgrind startup:
(gdb) monitor check_memory defined 0x8049e28 1
Address 0x8049E28 len 1 defined
==14698== Location 0x8049e28 is 0 bytes inside string10[0],
==14698== declared at prog.c:10, in frame #0 of thread 1
(gdb)
leak_check [full*|summary|xtleak] [kinds <set>|reachable|possibleleak*|definiteleak]
[heuristics heur1,heur2,...] [increased*|changed|any] [unlimited*|limited
<max_loss_records_output>] performs a leak check. The * in the arguments indicates the default
values.
If the [full*|summary|xtleak] argument is summary, only a summary of the leak search is given;
otherwise a full leak report is produced. A full leak report gives detailed information for each leak: the stack
trace where the leaked blocks were allocated, the number of blocks leaked and their total size. When a full report
is requested, the next two arguments further specify what kind of leaks to report. A leak’s details are shown if they
match both the second and third argument. A full leak report might output detailed information for many leaks.
The number of leaks for which information is output can be controlled using the limited argument followed by
the maximum number of leak records to output. If this maximum is reached, the leak search outputs the records with
the biggest number of bytes.
The value xtleak also produces a full leak report, but outputs it as an xtree in a file xtleak.kcg.%p.%n (see
--log-file). See Execution Trees for a detailed explanation of execution tree formats. See --xtree-leak for the
description of the events in an xtree leak file.
The kinds argument controls what kind of blocks are shown for a full leak search. The set of leak kinds to show
can be specified using a <set> similarly to the command line option --show-leak-kinds. Alternatively, the
value definiteleak is equivalent to kinds definite, and the value possibleleak is equivalent to kinds
definite,possible: it will also show possibly leaked blocks, i.e. those for which only an interior pointer was
found. The value reachable will show all block categories (i.e. it is equivalent to kinds all).
The heuristics argument controls the heuristics used during the leak search. The set of heuristics to use can
be specified using a <set> similarly to the command line option --leak-check-heuristics. The default
value for the heuristics argument is heuristics none.
The [increased*|changed|any] argument controls what kinds of changes are shown for a full leak
search. The value increased specifies that only block allocation stacks with an increased number of leaked
bytes or blocks since the previous leak check should be shown. The value changed specifies that allocation
stacks with any change since the previous leak check should be shown. The value any specifies that all leak entries
should be shown, regardless of any increase or decrease. If increased or changed is specified, the
leak report entries will show the delta relative to the previous leak report.
The following example shows usage of the leak_check monitor command on the memcheck/tests/leak-cases.c
regression test. The first command outputs one entry having an increase in the leaked bytes. The second command
is the same as the first command, but uses the abbreviated forms accepted by GDB and the Valgrind gdbserver. It
only outputs the summary information, as there was no increase since the previous leak search.
(gdb) monitor leak_check full possibleleak increased
==19520== 16 (+16) bytes in 1 (+1) blocks are possibly lost in loss record 9 of 12
==19520== at 0x40070B4: malloc (vg_replace_malloc.c:263)
==19520== by 0x80484D5: mk (leak-cases.c:52)
==19520== by 0x804855F: f (leak-cases.c:81)
==19520== by 0x80488E0: main (leak-cases.c:107)
==19520==
==19520== LEAK SUMMARY:
==19520== definitely lost: 32 (+0) bytes in 2 (+0) blocks
==19520== indirectly lost: 16 (+0) bytes in 1 (+0) blocks
==19520== possibly lost: 32 (+16) bytes in 2 (+1) blocks
==19520== still reachable: 96 (+16) bytes in 6 (+1) blocks
==19520== suppressed: 0 (+0) bytes in 0 (+0) blocks
==19520== Reachable blocks (those to which a pointer was found) are not shown.
==19520== To see them, add ’reachable any’ args to leak_check
==19520==
(gdb) mo l
==19520== LEAK SUMMARY:
==19520== definitely lost: 32 (+0) bytes in 2 (+0) blocks
==19520== indirectly lost: 16 (+0) bytes in 1 (+0) blocks
==19520== possibly lost: 32 (+0) bytes in 2 (+0) blocks
==19520== still reachable: 96 (+0) bytes in 6 (+0) blocks
==19520== suppressed: 0 (+0) bytes in 0 (+0) blocks
==19520== Reachable blocks (those to which a pointer was found) are not shown.
==19520== To see them, add ’reachable any’ args to leak_check
==19520==
(gdb)
Note that when using Valgrind’s gdbserver, it is not necessary to rerun with --leak-check=full
--show-reachable=yes to see the reachable blocks. You can obtain the same information without rerunning
by using the GDB command monitor leak_check full reachable any (or, using abbreviation: mo
lfra).
block_list <loss_record_nr>|<loss_record_nr_from>..<loss_record_nr_to>
[unlimited*|limited <max_blocks>] [heuristics heur1,heur2,...] shows the list of
blocks belonging to <loss_record_nr> (or to the loss records range <loss_record_nr_from>..<loss_record_nr_to>).
The number of blocks to print can be controlled using the limited argument followed by the maximum number
of blocks to output. If one or more heuristics are given, only the loss records and blocks found via one of the given
heur1,heur2,... heuristics are printed.
A leak search merges the allocated blocks into loss records: a loss record groups all blocks having the same state
(for example, Definitely Lost) and the same allocation backtrace. Each loss record is identified in the leak search
result by a loss record number. The block_list command shows the loss record information followed by the
addresses and sizes of the blocks which have been merged into the loss record. If a block was found using a heuristic,
the block size is followed by the heuristic.
If a directly lost block causes some other blocks to be indirectly lost, the block_list command will also show these
indirectly lost blocks. The indirectly lost blocks will be indented according to the level of indirection between the
directly lost block and the indirectly lost block(s). Each indirectly lost block is followed by the reference of its loss
record.
The block_list command can be used on the results of a leak search as long as no block has been freed after this
leak search: as soon as the program frees a block, a new leak search is needed before block_list can be used again.
In the below example, the program leaks a tree structure by losing the pointer to the block A (top of the tree). So,
the block A is directly lost, causing an indirect loss of blocks B to G. The first block_list command shows the loss
record of A (a definitely lost block with address 0x4028028, size 16). The addresses and sizes of the indirectly lost
blocks due to block A are shown below the block A. The second command shows the details of one of the indirect
loss records output by the first command.
A
/ \
B C
/ \ / \
D E F G
(gdb) bt
#0 main () at leak-tree.c:69
(gdb) monitor leak_check full any
==19552== 112 (16 direct, 96 indirect) bytes in 1 blocks are definitely lost in loss record 7 of 7
==19552== at 0x40070B4: malloc (vg_replace_malloc.c:263)
==19552== by 0x80484D5: mk (leak-tree.c:28)
==19552== by 0x80484FC: f (leak-tree.c:41)
==19552== by 0x8048856: main (leak-tree.c:63)
==19552==
==19552== LEAK SUMMARY:
==19552== definitely lost: 16 bytes in 1 blocks
==19552== indirectly lost: 96 bytes in 6 blocks
==19552== possibly lost: 0 bytes in 0 blocks
==19552== still reachable: 0 bytes in 0 blocks
==19552== suppressed: 0 bytes in 0 blocks
==19552==
(gdb) monitor block_list 7
==19552== 112 (16 direct, 96 indirect) bytes in 1 blocks are definitely lost in loss record 7 of 7
==19552== at 0x40070B4: malloc (vg_replace_malloc.c:263)
==19552== by 0x80484D5: mk (leak-tree.c:28)
==19552== by 0x80484FC: f (leak-tree.c:41)
==19552== by 0x8048856: main (leak-tree.c:63)
==19552== 0x4028028[16]
==19552== 0x4028068[16] indirect loss record 1
==19552== 0x40280E8[16] indirect loss record 3
==19552== 0x4028128[16] indirect loss record 4
==19552== 0x40280A8[16] indirect loss record 2
==19552== 0x4028168[16] indirect loss record 5
==19552== 0x40281A8[16] indirect loss record 6
(gdb) mo b 2
==19552== 16 bytes in 1 blocks are indirectly lost in loss record 2 of 7
==19552== at 0x40070B4: malloc (vg_replace_malloc.c:263)
==19552== by 0x80484D5: mk (leak-tree.c:28)
==19552== by 0x8048519: f (leak-tree.c:43)
==19552== by 0x8048856: main (leak-tree.c:63)
==19552== 0x40280A8[16]
==19552== 0x4028168[16] indirect loss record 5
==19552== 0x40281A8[16] indirect loss record 6
(gdb)
who_points_at <addr> [<len>] shows all the locations where a pointer to addr is found. If len is equal
to 1, the command only shows the locations pointing exactly at addr (i.e. the "start pointers" to addr). If len is > 1,
"interior pointers" pointing at the len first bytes will also be shown.
The locations searched for are the same as the locations used in the leak search. So, who_points_at can, among
other things, be used to show why the leak search can still reach a block, or to search for dangling pointers to a freed block. Each
location pointing at addr (or pointing inside addr if interior pointers are being searched for) will be described.
In the below example, the pointers to the 'tree block A' (see the example for the block_list command) are shown
before the tree was leaked. The descriptions are detailed because the option --read-var-info=yes was given at Valgrind
startup. The second call shows the pointers (start and interior pointers) to block G. The block G (0x40281A8) is
reachable via block C (0x40280a8) and register ECX of tid 1 (tid is the Valgrind thread id). It is "interior reachable"
via the register EBX.
(gdb) monitor who_points_at 0x4028028
==20852== Searching for pointers to 0x4028028
==20852== *0x8049e20 points at 0x4028028
==20852== Location 0x8049e20 is 0 bytes inside global var "t"
==20852== declared at leak-tree.c:35
(gdb) monitor who_points_at 0x40281A8 16
==20852== Searching for pointers pointing in 16 bytes from 0x40281a8
==20852== *0x40280ac points at 0x40281a8
==20852== Address 0x40280ac is 4 bytes inside a block of size 16 alloc’d
==20852== at 0x40070B4: malloc (vg_replace_malloc.c:263)
==20852== by 0x80484D5: mk (leak-tree.c:28)
==20852== by 0x8048519: f (leak-tree.c:43)
==20852== by 0x8048856: main (leak-tree.c:63)
==20852== tid 1 register ECX points at 0x40281a8
==20852== tid 1 register EBX interior points at 2 bytes inside 0x40281a8
(gdb)
When who_points_at finds an interior pointer, it will report the heuristic(s) with which this interior
pointer will be considered as reachable. Note that this is done independently of the value of the option
--leak-check-heuristics. In the below example, the loss record 6 indicates a possibly lost block.
who_points_at reports that there is an interior pointer pointing in this block, and that the block can be
considered reachable using the heuristic multipleinheritance.
(gdb) monitor block_list 6
==3748== 8 bytes in 1 blocks are possibly lost in loss record 6 of 7
==3748== at 0x4007D77: operator new(unsigned int) (vg_replace_malloc.c:313)
==3748== by 0x8048954: main (leak_cpp_interior.cpp:43)
==3748== 0x402A0E0[8]
(gdb) monitor who_points_at 0x402A0E0 8
==3748== Searching for pointers pointing in 8 bytes from 0x402a0e0
==3748== *0xbe8ee078 interior points at 4 bytes inside 0x402a0e0
==3748== Address 0xbe8ee078 is on thread 1’s stack
==3748== block at 0x402a0e0 considered reachable by ptr 0x402a0e4 using multipleinheritance heuristic
(gdb)
xtmemory [<filename> default xtmemory.kcg.%p.%n] requests the Memcheck tool to produce an
xtree heap memory report. See Execution Trees for a detailed explanation of execution trees.
4.7. Client Requests
The following client requests are defined in memcheck.h. See memcheck.h for exact details of their arguments.
VALGRIND_MAKE_MEM_NOACCESS, VALGRIND_MAKE_MEM_UNDEFINED and VALGRIND_MAKE_MEM_DEFINED.
These mark address ranges as completely inaccessible, accessible but containing undefined data, and accessible
and containing defined data, respectively. They return -1 when run on Valgrind and 0 otherwise.
VALGRIND_MAKE_MEM_DEFINED_IF_ADDRESSABLE. This is just like VALGRIND_MAKE_MEM_DEFINED
but only affects those bytes that are already addressable.
VALGRIND_CHECK_MEM_IS_ADDRESSABLE and VALGRIND_CHECK_MEM_IS_DEFINED: check immediately
whether or not the given address range has the relevant property, and if not, print an error message. Also, for
the convenience of the client, they return zero if the relevant property holds; otherwise, the returned value is the
address of the first byte for which the property is not true. They always return 0 when not run on Valgrind.
VALGRIND_CHECK_VALUE_IS_DEFINED: a quick and easy way to find out whether Valgrind thinks a particular
value (lvalue, to be precise) is addressable and defined. Prints an error message if not. It has no return value.
VALGRIND_DO_LEAK_CHECK: does a full memory leak check (like --leak-check=full) right now. This is
useful for incrementally checking for leaks between arbitrary places in the program’s execution. It has no return
value.
VALGRIND_DO_ADDED_LEAK_CHECK: same as VALGRIND_DO_LEAK_CHECK but only shows the entries
for which there was an increase in leaked bytes or leaked number of blocks since the previous leak search. It has
no return value.
VALGRIND_DO_CHANGED_LEAK_CHECK: same as VALGRIND_DO_LEAK_CHECK but only shows the entries
for which there was an increase or decrease in leaked bytes or leaked number of blocks since the previous leak
search. It has no return value.
VALGRIND_DO_QUICK_LEAK_CHECK: like VALGRIND_DO_LEAK_CHECK, except it produces only a leak
summary (like --leak-check=summary). It has no return value.
VALGRIND_COUNT_LEAKS: fills in the four arguments with the number of bytes of memory found by
the previous leak check to be leaked (i.e. the sum of direct leaks and indirect leaks), dubious, reach-
able and suppressed. This is useful in test harness code, after calling VALGRIND_DO_LEAK_CHECK or
VALGRIND_DO_QUICK_LEAK_CHECK.
VALGRIND_COUNT_LEAK_BLOCKS: identical to VALGRIND_COUNT_LEAKS except that it returns the number
of blocks rather than the number of bytes in each category.
VALGRIND_GET_VBITS and VALGRIND_SET_VBITS: allow you to get and set the V (validity) bits for an
address range. You should probably only set V bits that you have got with VALGRIND_GET_VBITS. Only for
those who really know what they are doing.
VALGRIND_CREATE_BLOCK and VALGRIND_DISCARD. VALGRIND_CREATE_BLOCK takes an address, a
number of bytes and a character string. The specified address range is then associated with that string. When
Memcheck reports an invalid access to an address in the range, it will describe it in terms of this block rather than
in terms of any other block it knows about. Note that the use of this macro does not actually change the state of
memory in any way -- it merely gives a name for the range.
At some point you may want Memcheck to stop reporting errors in terms of the block named by
VALGRIND_CREATE_BLOCK. To make this possible, VALGRIND_CREATE_BLOCK returns a "block
handle", which is a C int value. You can pass this block handle to VALGRIND_DISCARD. After doing so,
Valgrind will no longer relate addressing errors in the specified range to the block. Passing invalid handles to
VALGRIND_DISCARD is harmless.
4.8. Memory Pools: describing and working
with custom allocators
Some programs use custom memory allocators, often for performance reasons. Left to itself, Memcheck is unable
to understand the behaviour of custom allocation schemes as well as it understands the standard allocators, and so
may miss errors and leaks in your program. What this section describes is a way to give Memcheck enough of a
description of your custom allocator that it can make at least some sense of what is happening.
There are many different sorts of custom allocator, so Memcheck attempts to reason about them using a loose, abstract
model. We use the following terminology when describing custom allocation systems:
• Custom allocation involves a set of independent "memory pools".
• Memcheck’s notion of a memory pool consists of a single "anchor address" and a set of non-overlapping "chunks"
associated with the anchor address.
• Typically a pool’s anchor address is the address of a book-keeping "header" structure.
• Typically the pool’s chunks are drawn from a contiguous "superblock" acquired through the system malloc or
mmap.
Memcheck: a memory error detector
Keep in mind that the last two points above say "typically": the Valgrind mempool client request API is intentionally
vague about the exact structure of a mempool. There is no specific mention made of headers or superblocks.
Nevertheless, the following picture may help elucidate the intention of the terms in the API:
"pool"
(anchor address)
|
v
+--------+---+
| header | o |
+--------+-|-+
|
v superblock
+------+---+--------------+---+------------------+
| |rzB| allocation |rzB| |
+------+---+--------------+---+------------------+
^ ^
| |
"addr" "addr"+"size"
Note that the header and the superblock may be contiguous or discontiguous, and there may be multiple superblocks
associated with a single header; such variations are opaque to Memcheck. The API only requires that your allocation
scheme can present sensible values of "pool", "addr" and "size".
Typically, before making client requests related to mempools, a client program will have allocated
such a header and superblock for their mempool, and marked the superblock NOACCESS using the
VALGRIND_MAKE_MEM_NOACCESS client request.
When dealing with mempools, the goal is to maintain a particular invariant condition: that Memcheck believes the
unallocated portions of the pool’s superblock (including redzones) are NOACCESS. To maintain this invariant, the
client program must ensure that the superblock starts out in that state; Memcheck cannot make it so, since Memcheck
never explicitly learns about the superblock of a pool, only the allocated chunks within the pool.
Once the header and superblock for a pool are established and properly marked, there are a number of client requests
programs can use to inform Memcheck about changes to the state of a mempool:
VALGRIND_CREATE_MEMPOOL(pool, rzB, is_zeroed): This request registers the address pool as the
anchor address for a memory pool. It also provides a size rzB, specifying how large the redzones placed around
chunks allocated from the pool should be. Finally, it provides an is_zeroed argument that specifies whether the
pool’s chunks are zeroed (more precisely: defined) when allocated.
Upon completion of this request, no chunks are associated with the pool. The request simply tells Memcheck that
the pool exists, so that subsequent calls can refer to it as a pool.
VALGRIND_CREATE_MEMPOOL_EXT(pool, rzB, is_zeroed, flags): Create a memory pool with
some flags (that can be OR-ed together) specifying extended behaviour. When flags is zero, the behaviour is
identical to VALGRIND_CREATE_MEMPOOL.
• The flag VALGRIND_MEMPOOL_METAPOOL specifies that the pieces of memory associated with the pool using
VALGRIND_MEMPOOL_ALLOC will be used by the application as superblocks to dole out MALLOC_LIKE
blocks using VALGRIND_MALLOCLIKE_BLOCK. In other words, a meta pool is a two-level pool: the first
level consists of the blocks described by VALGRIND_MEMPOOL_ALLOC; the second-level blocks are described
using VALGRIND_MALLOCLIKE_BLOCK. Note that the association between the pool and the second-level
blocks is implicit: second-level blocks will be located inside first-level blocks. It is necessary to use the
VALGRIND_MEMPOOL_METAPOOL flag for such two-level pools, as otherwise Valgrind will detect overlapping
memory blocks, and will abort execution (e.g. during leak search).
• The flag VALGRIND_MEMPOOL_AUTO_FREE marks such a meta pool as an ’auto free’
pool; it must be OR-ed together with VALGRIND_MEMPOOL_METAPOOL. For an ’auto
free’ pool, VALGRIND_MEMPOOL_FREE will automatically free the second-level blocks
that are contained inside the first-level block freed with VALGRIND_MEMPOOL_FREE.
In other words, calling VALGRIND_MEMPOOL_FREE will cause implicit calls
to VALGRIND_FREELIKE_BLOCK for all the second-level blocks included in the first-level block. Note: it is
an error to use the VALGRIND_MEMPOOL_AUTO_FREE flag without the VALGRIND_MEMPOOL_METAPOOL
flag.
VALGRIND_DESTROY_MEMPOOL(pool): This request tells Memcheck that a pool is being torn down. Memcheck
then removes all records of chunks associated with the pool, as well as its record of the pool’s existence. While
destroying its records of a mempool, Memcheck resets the redzones of any live chunks in the pool to NOACCESS.
VALGRIND_MEMPOOL_ALLOC(pool, addr, size): This request informs Memcheck that a size-byte
chunk has been allocated at addr, and associates the chunk with the specified pool. If the pool was created
with nonzero rzB redzones, Memcheck will mark the rzB bytes before and after the chunk as NOACCESS. If
the pool was created with the is_zeroed argument set, Memcheck will mark the chunk as DEFINED, otherwise
Memcheck will mark the chunk as UNDEFINED.
VALGRIND_MEMPOOL_FREE(pool, addr): This request informs Memcheck that the chunk at addr should
no longer be considered allocated. Memcheck will mark the chunk associated with addr as NOACCESS, and
delete its record of the chunk’s existence.
VALGRIND_MEMPOOL_TRIM(pool, addr, size): This request trims the chunks associated with pool.
The request only operates on chunks associated with pool. Trimming is formally defined as:
All chunks entirely inside the range addr..(addr+size-1) are preserved.
All chunks entirely outside the range addr..(addr+size-1) are discarded, as though
VALGRIND_MEMPOOL_FREE was called on them.
All other chunks must intersect with the range addr..(addr+size-1); areas outside the intersection are
marked as NOACCESS, as though they had been independently freed with VALGRIND_MEMPOOL_FREE.
This is a somewhat rare request, but can be useful in implementing the type of mass-free operations common in
custom LIFO allocators.
VALGRIND_MOVE_MEMPOOL(poolA, poolB): This request informs Memcheck that the pool previously
anchored at address poolA has moved to anchor address poolB. This is a rare request, typically only needed
if you realloc the header of a mempool.
No memory-status bits are altered by this request.
VALGRIND_MEMPOOL_CHANGE(pool, addrA, addrB, size): This request informs Memcheck that the
chunk previously allocated at address addrA within pool has been moved and/or resized, and should be changed
to cover the region addrB..(addrB+size-1). This is a rare request, typically only needed if you realloc a
superblock or wish to extend a chunk without changing its memory-status bits.
No memory-status bits are altered by this request.
VALGRIND_MEMPOOL_EXISTS(pool): This request informs the caller whether or not Memcheck is currently
tracking a mempool at anchor address pool. It evaluates to 1 when there is a mempool associated with that address,
0 otherwise. This is a rare request, only useful in circumstances when client code might have lost track of the set of
active mempools.
4.9. Debugging MPI Parallel Programs with
Valgrind
Memcheck supports debugging of distributed-memory applications which use the MPI message passing standard.
This support consists of a library of wrapper functions for the PMPI_* interface. When incorporated into the
application’s address space, either by direct linking or by LD_PRELOAD, the wrappers intercept calls to PMPI_Send,
PMPI_Recv, etc. They then use client requests to inform Memcheck of memory state changes caused by the
function being wrapped. This reduces the number of false positives that Memcheck otherwise typically reports for
MPI applications.
The wrappers also take the opportunity to carefully check size and definedness of buffers passed as arguments to MPI
functions, hence detecting errors such as passing undefined data to PMPI_Send, or receiving data into a buffer which
is too small.
Unlike most of the rest of Valgrind, the wrapper library is subject to a BSD-style license, so you can link it into any
code base you like. See the top of mpi/libmpiwrap.c for license details.
4.9.1. Building and installing the wrappers
The wrapper library will be built automatically if possible. Valgrind’s configure script will look for a suitable mpicc
to build it with. This must be the same mpicc you use to build the MPI application you want to debug. By default,
Valgrind tries mpicc, but you can specify a different one by using the configure-time option --with-mpicc.
Currently the wrappers are only buildable with mpiccs which are based on GNU GCC or Intel’s C++ Compiler.
Check that the configure script prints a line like this:
checking for usable MPI2-compliant mpicc and mpi.h... yes, mpicc
If it says ... no, your mpicc has failed to compile and link a test MPI2 program.
If the configure test succeeds, continue in the usual way with make and make install. The final install tree
should then contain libmpiwrap-<platform>.so.
Compile up a test MPI program (eg, MPI hello-world) and try this:
LD_PRELOAD=$prefix/lib/valgrind/libmpiwrap-<platform>.so \
mpirun [args] $prefix/bin/valgrind ./hello
You should see something similar to the following:
valgrind MPI wrappers 31901: Active for pid 31901
valgrind MPI wrappers 31901: Try MPIWRAP_DEBUG=help for possible options
repeated for every process in the group. If you do not see these, there is a build/installation problem of some kind.
The MPI functions to be wrapped are assumed to be in an ELF shared object with soname matching libmpi.so*.
This is known to be correct at least for Open MPI and Quadrics MPI, and can easily be changed if required.
4.9.2. Getting started
Compile your MPI application as usual, taking care to link it using the same mpicc that your Valgrind build was
configured with.
Use the following basic scheme to run your application on Valgrind with the wrappers engaged:
MPIWRAP_DEBUG=[wrapper-args] \
LD_PRELOAD=$prefix/lib/valgrind/libmpiwrap-<platform>.so \
mpirun [mpirun-args] \
$prefix/bin/valgrind [valgrind-args] \
[application] [app-args]
As an alternative to LD_PRELOADing libmpiwrap-<platform>.so, you can simply link it to your application
if desired. This should not disturb native behaviour of your application in any way.
4.9.3. Controlling the wrapper library
Environment variable MPIWRAP_DEBUG is consulted at startup. The default behaviour is to print a starting banner
valgrind MPI wrappers 16386: Active for pid 16386
valgrind MPI wrappers 16386: Try MPIWRAP_DEBUG=help for possible options
and then be relatively quiet.
You can give a list of comma-separated options in MPIWRAP_DEBUG. These are:
verbose: show entries/exits of all wrappers. Also show extra debugging info, such as the status of outstanding
MPI_Requests resulting from uncompleted MPI_Irecvs.
quiet: opposite of verbose, only print anything when the wrappers want to report a detected programming
error, or in case of catastrophic failure of the wrappers.
warn: by default, functions which lack proper wrappers are not commented on, just silently ignored. This option
causes a warning to be printed for each unwrapped function used, up to a maximum of three warnings per function.
strict: print an error message and abort the program if a function lacking a wrapper is used.
If you want to use Valgrind’s XML output facility (--xml=yes), you should pass quiet in MPIWRAP_DEBUG so
as to get rid of any extraneous printing from the wrappers.
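For example, combining quiet with XML output might look like this (the paths and process count are illustrative, following the general scheme above; --xml-file directs each process's XML to its own file):

```shell
MPIWRAP_DEBUG=quiet \
LD_PRELOAD=$prefix/lib/valgrind/libmpiwrap-<platform>.so \
mpirun -np 4 \
$prefix/bin/valgrind --xml=yes --xml-file=vg.%p.xml \
./hello
```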
4.9.4. Functions
All MPI2 functions except MPI_Wtick, MPI_Wtime and MPI_Pcontrol have wrappers. The first two are not
wrapped because they return a double, which Valgrind’s function-wrap mechanism cannot handle (but it could easily
be extended to do so). MPI_Pcontrol cannot be wrapped as it has variable arity: int MPI_Pcontrol(const
int level, ...).
Most functions are wrapped with a default wrapper which does nothing except complain or abort if it is called,
depending on settings in MPIWRAP_DEBUG listed above. The following functions have "real", do-something-useful
wrappers:
PMPI_Send PMPI_Bsend PMPI_Ssend PMPI_Rsend
PMPI_Recv PMPI_Get_count
PMPI_Isend PMPI_Ibsend PMPI_Issend PMPI_Irsend
PMPI_Irecv
PMPI_Wait PMPI_Waitall
PMPI_Test PMPI_Testall
PMPI_Iprobe PMPI_Probe
PMPI_Cancel
PMPI_Sendrecv
PMPI_Type_commit PMPI_Type_free
PMPI_Pack PMPI_Unpack
PMPI_Bcast PMPI_Gather PMPI_Scatter PMPI_Alltoall
PMPI_Reduce PMPI_Allreduce PMPI_Op_create
PMPI_Comm_create PMPI_Comm_dup PMPI_Comm_free PMPI_Comm_rank PMPI_Comm_size
PMPI_Error_string
PMPI_Init PMPI_Initialized PMPI_Finalize
A few functions such as PMPI_Address are listed as HAS_NO_WRAPPER. They have no wrapper at all as there is
nothing worth checking, and giving a no-op wrapper would reduce performance for no reason.
Note that the wrapper library can itself generate large numbers of calls to the MPI implementation,
especially when walking complex types. The most commonly called functions are PMPI_Extent,
PMPI_Type_get_envelope, PMPI_Type_get_contents, and PMPI_Type_free.
4.9.5. Types
MPI-1.1 structured types are supported, and walked exactly. The currently supported combiners
are MPI_COMBINER_NAMED, MPI_COMBINER_CONTIGUOUS, MPI_COMBINER_VECTOR,
MPI_COMBINER_HVECTOR, MPI_COMBINER_INDEXED, MPI_COMBINER_HINDEXED and MPI_COMBINER_STRUCT.
This should cover all MPI-1.1 types. The mechanism (function walk_type) should extend easily to cover MPI2
combiners.
MPI defines some named structured types (MPI_FLOAT_INT, MPI_DOUBLE_INT, MPI_LONG_INT, MPI_2INT,
MPI_SHORT_INT, MPI_LONG_DOUBLE_INT) which are pairs of some basic type and a C int. Unfortunately the
MPI specification makes it impossible to look inside these types and see where the fields are. Therefore these
wrappers assume the types are laid out as struct { float val; int loc; } (for MPI_FLOAT_INT), etc,
and act accordingly. This appears to be correct at least for Open MPI 1.0.2 and for Quadrics MPI.
If strict is an option specified in MPIWRAP_DEBUG, the application will abort if an unhandled type is encountered.
Otherwise, the application will print a warning message and continue.
Some effort is made to mark/check memory ranges corresponding to arrays of values in a single pass. This is
important for performance since asking Valgrind to mark/check any range, no matter how small, carries quite a large
constant cost. This optimisation is applied to arrays of primitive types (double, float, int, long, long long,
short, char, and long double on platforms where sizeof(long double) == 8). For arrays of all other
types, the wrappers handle each element individually and so there can be a very large performance cost.
4.9.6. Writing new wrappers
For the most part the wrappers are straightforward. The only significant complexity arises with nonblocking receives.
The issue is that MPI_Irecv states the recv buffer and returns immediately, giving a handle (MPI_Request)
for the transaction. Later the user will have to poll for completion with MPI_Wait etc, and when the transaction
completes successfully, the wrappers have to paint the recv buffer. But the recv buffer details are not presented to
MPI_Wait -- only the handle is. The library therefore maintains a shadow table which associates uncompleted
MPI_Requests with the corresponding buffer address/count/type. When an operation completes, the table is
searched for the associated address/count/type info, and memory is marked accordingly.
Access to the table is guarded by a (POSIX pthreads) lock, so as to make the library thread-safe.
The table is allocated with malloc and never freed, so it will show up in leak checks.
Writing new wrappers should be fairly easy. The source file is mpi/libmpiwrap.c. If possible, find an existing
wrapper for a function of similar behaviour to the one you want to wrap, and use it as a starting point. The wrappers
are organised in sections in the same order as the MPI 1.1 spec, to aid navigation. When adding a wrapper, remember
to comment out the definition of the default wrapper in the long list of defaults at the bottom of the file (do not remove
it, just comment it out).
4.9.7. What to expect when using the wrappers
The wrappers should reduce Memcheck’s false-error rate on MPI applications. Because the wrapping is done at the
MPI interface, there will still potentially be a large number of errors reported in the MPI implementation below the
interface. The best you can do is try to suppress them.
You may also find that the input-side (buffer length/definedness) checks find errors in your MPI use, for example
passing too short a buffer to MPI_Recv.
Functions which are not wrapped may increase the false error rate. A possible approach is to run with MPIWRAP_DEBUG
containing warn. This will show you functions which lack proper wrappers but which are nevertheless used. You
can then write wrappers for them.
A known source of potential false errors is the PMPI_Reduce family of functions, when using a custom (user-
defined) reduction function. In a reduction operation, each node notionally sends data to a "central point" which uses
the specified reduction function to merge the data items into a single item. Hence, in general, data is passed between
nodes and fed to the reduction function, but the wrapper library cannot mark the transferred data as initialised before
it is handed to the reduction function, because all that happens "inside" the PMPI_Reduce call. As a result you may
see false positives reported in your reduction function.
5. Cachegrind: a cache and
branch-prediction profiler
To use this tool, you must specify --tool=cachegrind on the Valgrind command line.
5.1. Overview
Cachegrind simulates how your program interacts with a machine’s cache hierarchy and (optionally) branch predictor.
It simulates a machine with independent first-level instruction and data caches (I1 and D1), backed by a unified
second-level cache (L2). This exactly matches the configuration of many modern machines.
However, some modern machines have three or four levels of cache. For these machines (in the cases where
Cachegrind can auto-detect the cache configuration) Cachegrind simulates the first-level and last-level caches. The
reason for this choice is that the last-level cache has the most influence on runtime, as it masks accesses to main
memory. Furthermore, the L1 caches often have low associativity, so simulating them can detect cases where the
code interacts badly with this cache (eg. traversing a matrix column-wise with the row length being a power of 2).
Therefore, Cachegrind always refers to the I1, D1 and LL (last-level) caches.
Cachegrind gathers the following statistics (the abbreviation used for each statistic is given in parentheses):
• I cache reads (Ir, which equals the number of instructions executed), I1 cache read misses (I1mr) and LL cache
instruction read misses (ILmr).
• D cache reads (Dr, which equals the number of memory reads), D1 cache read misses (D1mr), and LL cache data
read misses (DLmr).
• D cache writes (Dw, which equals the number of memory writes), D1 cache write misses (D1mw), and LL cache
data write misses (DLmw).
• Conditional branches executed (Bc) and conditional branches mispredicted (Bcm).
• Indirect branches executed (Bi) and indirect branches mispredicted (Bim).
Note that D1 total accesses is given by D1mr + D1mw, and that LL total accesses is given by ILmr + DLmr + DLmw.
These statistics are presented for the entire program and for each function in the program. You can also annotate each
line of source code in the program with the counts that were caused directly by it.
On a modern machine, an L1 miss will typically cost around 10 cycles, an LL miss can cost as much as 200 cycles,
and a mispredicted branch costs in the region of 10 to 30 cycles. Detailed cache and branch profiling can be very
useful for understanding how your program interacts with the machine and thus how to make it faster.
Also, since one instruction cache read is performed per instruction executed, you can find out how many instructions
are executed per line, which can be useful for traditional profiling.
5.2. Using Cachegrind, cg_annotate and
cg_merge
First off, as for normal Valgrind use, you probably want to compile with debugging info (the -g option). But
by contrast with normal Valgrind use, you probably do want to turn optimisation on, since you should profile your
program as it will be normally run.
Then, you need to run Cachegrind itself to gather the profiling information, and then run cg_annotate to get a detailed
presentation of that information. As an optional intermediate step, you can use cg_merge to sum together the outputs
of multiple Cachegrind runs into a single file which you then use as the input for cg_annotate. Alternatively, you
can use cg_diff to difference the outputs of two Cachegrind runs into a single file which you then use as the input for
cg_annotate.
5.2.1. Running Cachegrind
To run Cachegrind on a program prog, run:
valgrind --tool=cachegrind prog
The program will execute (slowly). Upon completion, summary statistics that look like this will be printed:
==31751== I refs: 27,742,716
==31751== I1 misses: 276
==31751== LLi misses: 275
==31751== I1 miss rate: 0.0%
==31751== LLi miss rate: 0.0%
==31751==
==31751== D refs: 15,430,290 (10,955,517 rd + 4,474,773 wr)
==31751== D1 misses: 41,185 ( 21,905 rd + 19,280 wr)
==31751== LLd misses: 23,085 ( 3,987 rd + 19,098 wr)
==31751== D1 miss rate: 0.2% ( 0.1% + 0.4%)
==31751== LLd miss rate: 0.1% ( 0.0% + 0.4%)
==31751==
==31751== LL misses: 23,360 ( 4,262 rd + 19,098 wr)
==31751== LL miss rate: 0.0% ( 0.0% + 0.4%)
Cache accesses for instruction fetches are summarised first, giving the number of fetches made (this is the number of
instructions executed, which can be useful to know in its own right), the number of I1 misses, and the number of LL
instruction (LLi) misses.
Cache accesses for data follow. The information is similar to that of the instruction fetches, except that the values are
also shown split between reads and writes (note each row’s rd and wr values add up to the row’s total).
Combined instruction and data figures for the LL cache follow that. Note that the LL miss rate is computed relative
to the total number of memory accesses, not the number of L1 misses. I.e. it is (ILmr + DLmr + DLmw) /
(Ir + Dr + Dw), not (ILmr + DLmr + DLmw) / (I1mr + D1mr + D1mw).
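Using the counts from the example summary earlier, the distinction is a few lines of arithmetic (the figures below are copied from that output):

```python
# Counts taken from the example Cachegrind summary above.
Ir = 27_742_716                           # instruction reads
Dr, Dw = 10_955_517, 4_474_773            # data reads and writes
ILmr, DLmr, DLmw = 275, 3_987, 19_098     # LL misses by kind

ll_misses = ILmr + DLmr + DLmw
print(ll_misses)                          # 23360, the "LL misses" total above

# LL miss rate, relative to all memory accesses as Cachegrind computes it:
ll_miss_rate = ll_misses / (Ir + Dr + Dw)
print(f"{ll_miss_rate:.4%}")              # roughly 0.05%
```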
Branch prediction statistics are not collected by default. To do so, add the option --branch-sim=yes.
5.2.2. Output File
As well as printing summary information, Cachegrind also writes more detailed profiling information to a file. By
default this file is named cachegrind.out.<pid> (where <pid> is the program’s process ID), but its name
can be changed with the --cachegrind-out-file option. This file is human-readable, but is intended to be
interpreted by the accompanying program cg_annotate, described in the next section.
The default .<pid> suffix on the output file name serves two purposes. Firstly, it means you don’t have to rename
old log files that you don’t want to overwrite. Secondly, and more importantly, it allows correct profiling with the
--trace-children=yes option of programs that spawn child processes.
The output file can be big, many megabytes for large applications built with full debugging information.
5.2.3. Running cg_annotate
Before using cg_annotate, it is worth widening your window to be at least 120-characters wide if possible, as the
output lines can be quite long.
To get a function-by-function summary, run:
cg_annotate <filename>
on a Cachegrind output file.
5.2.4. The Output Preamble
The first part of the output looks like this:
--------------------------------------------------------------------------------
I1 cache: 65536 B, 64 B, 2-way associative
D1 cache: 65536 B, 64 B, 2-way associative
LL cache: 262144 B, 64 B, 8-way associative
Command: concord vg_to_ucode.c
Events recorded: Ir I1mr ILmr Dr D1mr DLmr Dw D1mw DLmw
Events shown: Ir I1mr ILmr Dr D1mr DLmr Dw D1mw DLmw
Event sort order: Ir I1mr ILmr Dr D1mr DLmr Dw D1mw DLmw
Threshold: 99%
Chosen for annotation:
Auto-annotation: off
This is a summary of the annotation options:
I1 cache, D1 cache, LL cache: cache configuration. So you know the configuration with which these results were
obtained.
Command: the command line invocation of the program under examination.
Events recorded: which events were recorded.
Events shown: the events shown, which is a subset of the events gathered. This can be adjusted with the --show
option.
Event sort order: the sort order in which functions are shown. For example, in this case the functions are sorted
from highest Ir counts t