There are many reasons why your program may run slowly when used with Memory Validator, but they largely boil down to two types of problem:
Collecting too much unwanted data
There are many options that let you turn off collection of data that is not important to you, some of which are turned off by default.
Turning off unwanted options prevents Memory Validator from spending time examining data you don't want collected (the sketch after this list shows the kinds of calls the various hooks relate to):
•If not trying to isolate memory corruptions, turn off buffer checking
•If not trying to isolate uninitialized data, turn off uninitialized data detection
•If not trying to detect handle leaks, turn off all the handle related hooks
•If not trying to detect leaks in GlobalAlloc, LocalAlloc and HeapAlloc, turn off the matching memory hooks
•If not trying to detect CRT leaks, turn off CRT leak detection
•If you don't need complete callstacks, collect only the part of the callstack that is interesting to you
Depending on how deep your program's callstacks get, this can have quite a dramatic impact on performance.
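For orientation, here is a minimal made-up sketch (not taken from any real program) of the kinds of calls each of these hook groups relates to; if your program never makes a particular kind of call, the matching hooks have nothing to record and can safely be turned off.

// Illustrative only: the kinds of allocation and handle calls the
// different hook groups relate to. Names and sizes here are arbitrary.
#include <windows.h>
#include <cstdlib>

void illustrateHookGroups()
{
    // CRT allocations - the calls the CRT memory hooks are concerned with
    char* crtBuffer = static_cast<char*>(malloc(256));
    int*  crtArray  = new int[32];

    // Win32 heap allocations - the calls the GlobalAlloc/LocalAlloc/HeapAlloc hooks are concerned with
    HGLOBAL globalMem = GlobalAlloc(GMEM_FIXED, 512);
    HLOCAL  localMem  = LocalAlloc(LMEM_FIXED, 128);
    LPVOID  heapMem   = HeapAlloc(GetProcessHeap(), 0, 1024);

    // Handle creation - the kind of call the handle related hooks are concerned with
    HANDLE event = CreateEvent(NULL, FALSE, FALSE, NULL);

    // Tidy up; a real leak would simply omit one of these calls.
    CloseHandle(event);
    HeapFree(GetProcessHeap(), 0, heapMem);
    LocalFree(localMem);
    GlobalFree(globalMem);
    delete [] crtArray;
    free(crtBuffer);
}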
Collecting data in a tight loop
If your program is still running slowly, it may well be because it's allocating many blocks of memory in a tight loop.
When this happens, Memory Validator gets swamped by the sheer volume and rate of data it needs to track and the symbols (for each callstack) it needs to resolve.
When the program exits the tight loop, performance returns to more normal speeds.
Often this is a sign that the target program could be improved by redesigning its memory allocation strategy.
Examining the statistics on the objects view will give you an insight into the number and frequency of allocations being made.
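As an illustration (a made-up sketch using hypothetical names, not code from any particular program), the first loop below allocates and frees a block on every iteration, generating a stream of allocation events and callstacks for Memory Validator to process; the second does the same work with a single buffer allocated outside the loop, so only one allocation needs to be tracked.

// Illustrative sketch of an allocation-heavy tight loop and one possible redesign.
#include <cstddef>
#include <vector>

void processRecord(char* buffer, std::size_t size);   // hypothetical worker function

void allocationHeavyLoop(std::size_t recordCount)
{
    // Slow under any allocation tracker: one allocation and one free per iteration.
    for (std::size_t i = 0; i < recordCount; ++i)
    {
        char* buffer = new char[4096];
        processRecord(buffer, 4096);
        delete [] buffer;
    }
}

void reusedBufferLoop(std::size_t recordCount)
{
    // Same work, but only one allocation is made regardless of recordCount.
    std::vector<char> buffer(4096);
    for (std::size_t i = 0; i < recordCount; ++i)
    {
        processRecord(buffer.data(), buffer.size());
    }
}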
The relative impact of each feature depends largely on the target program, but the generalisations below are based on a variety of programs of between 10,000 and 2,000,000 lines of code.
In order of impact, greatest first:
•Buffer overrun detection
If enabled, this can have quite a big performance hit, but only if your program uses the C (and Win32 shell) string functions (see the sketch after this list).
For example, a lot of string processing during startup can slow things down until the program is ready.
However, buffer overrun detection is a feature you will probably not need to have enabled very often.
•Uninitialized data detection
By its very nature this can have a high overhead, but the cost depends on the data being examined.
•Memory allocation tracking (CRT, Win32 heap, etc.)
The memory allocation tracking has a low overhead unless there are large numbers of allocations in very tight loops.
•COM object tracking
•Handle tracking
The handle tracking functions produce very little performance overhead even on very large programs.
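To give a concrete idea of what the two most expensive features look for, here is a small made-up example (all names are arbitrary): the strcpy call writes past the end of its buffer, which is the kind of mistake buffer overrun detection reports, and reading 'count' before anything is assigned to it is the kind of mistake uninitialized data detection reports.

// Illustrative only: two deliberate mistakes of the kind described above.
#include <cstdio>
#include <cstring>

void demonstrateDetectableMistakes()
{
    // Buffer overrun: the string needs 24 bytes but the buffer holds only 8.
    char name[8];
    strcpy(name, "this string is too long");

    // Uninitialized data: 'count' is read before anything has been assigned to it.
    int count;
    printf("count is %d\n", count);
}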
We don't give a suite of percentage impact figures for each feature as they can be misleading, but in some typical examples we found:
•with all options enabled, a program launched and ran in 90 seconds
•with the uninitialized data and buffer tracking disabled, it took 30 seconds
•a competitor's application took 40 minutes and then usually failed!
At the end of the day, the performance change will always be relative to the data generated by the target program, and no two programs are alike.
Even the size of the program is not a great indicator: for example, we tested a 2,000,000 line CAD program and a 300,000 line web authoring program.
The larger program started up under Memory Validator in much less time, simply due to the nature of the work each program was doing during startup.
Our suggestion:
If in doubt about the performance impact of Memory Validator we suggest you simply try it on your product and see.
We hope you'll be favourably impressed compared to our competitors!