We have grouped the testimonials by product family rather than by individual product.
I just installed and used Lua Coverage Validator for the first time. LCV is very impressive. I’ll be sending feedback as I use this. I haven’t tried the tutorial yet – that’s on my to-do list. Great product from what I’ve seen on this first pass. It identified one whole area of code missed by the tests, and that was just my first time watching it run.
1. Support, support and more support – in 10+ years of dealing with many software suppliers, both bigger and smaller than Software Verify, I have NEVER experienced this level of fantastic support. If more commercial companies followed your example the world would be a much better place.
2. Price vs Benefits vs Competitor Offerings. Even in the non-Lua environments, where there is more competition, we could not find any other offering at a similar price, and still get the features we required. Software Verify are actually providing a better product and support level than any other, and doing it for a much lower price than expected in this industry.
3. The software actually works!! From both the command line and from the GUI! And it is highly configurable from either interface – so much so that we always get the coverage results we need, even from the ever-moving target environment we require them from. I am not saying we could not get similar results from other similar software (for the non-Lua evaluations we conducted), but it seems like nobody else delivers *all* of the configurability one needs in one product, in a usable format.
4. Best of all – we’re working on multiple platforms and in multiple languages at SunSpace. It’s really great to learn all the ins and outs of the Lua Coverage Validator – and be able to simply switch to C/C++ Coverage Validator, or even to Lua Performance Validator, without much of a learning curve. Once I had learned 1 GUI and 1 CLI, I was extremely productive in all your other tools. I’ve even integrated results from different tools, run over different software, into various condensed summary reports, because all your output is in a standardised format. This has made me extremely productive!
5. Software Verify has been in existence for 6 years. During beta and evaluation periods, we were constantly impressed at how quickly new features we requested and fixes for bugs we reported were released.
Thanks, this version works fine. I reran my big coverage run and it took 8.6 hours – very impressive considering it takes Rational Coverage 30 hours to do the same thing.
The source code is ~800,000 lines. I am running ~150 runs of the test driver with different input specifications.
Today one of the guys noticed that GDI was behaving strangely; a quick look at Task Manager showed that GDI handles were railed at just under 10,000. Sounds like a job for Memory Validator, I thought to myself!
I fired it up on my machine, configured it to collect only GDI handle information, set a watchpoint, played with the GUI a bit, and set another. A quick visit to the leak detect menu and sure enough it took us straight to the problem line of code! Calling GetWindowDC() in a method call isn’t the smartest thing to do!
GetMaxLabelsSize(m_spCvarEnum,
GetWindowDC(),
&maxLabelWidth,
&maxLabelHeight);
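A minimal sketch of the usual fix, assuming the snippet lives in an MFC CWnd-derived class (as the argument-free GetWindowDC() call suggests): the DC returned by GetWindowDC() has to be handed back with ReleaseDC(), so calling it inline as an argument leaks one GDI handle per call.
CDC *pDC = GetWindowDC();          // acquire the window DC once
GetMaxLabelsSize(m_spCvarEnum,     // names carried over from the snippet above
                 pDC,
                 &maxLabelWidth,
                 &maxLabelHeight);
ReleaseDC(pDC);                    // hand the DC back – no GDI handle leak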
There were some smiling faces around the lab… oh, and the entire process took 5 minutes.
Congratulations guys, this tool is really starting to look nice!
Once again, great product, found a whole mass of leaks I was not aware of (and some that I was)!
You’re more than welcome! I had looked at several memory debugging packages, but most of these required a lot of work to get up and running. Memory Validator on the other hand was easy – no changes to my code, up and running and finding leaks and errors in minutes.
First off, congratulations on this great product of yours. With all these code reviews, I only wish I could use your products more often.
I used your API to embed MV calls into my application, generating watermarks before each transaction in QA’s test. This is an awesome feature!
The Memory view showed the allocations along with the watermarks, allowing me to see what was happening during each transaction. Again…good stuff.
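Sketched very loosely, the per-transaction watermark pattern looks something like the following – note that mvSetWatermark() is a placeholder name invented for illustration, not Memory Validator’s actual API; the real header and entry points are in the product documentation.
#include <cstdio>

extern "C" void mvSetWatermark(const char *name);   // hypothetical stand-in for the real MV API call

void runQaTransaction(int id)                        // assumed test-harness hook
{
    char label[64];
    std::snprintf(label, sizeof(label), "before transaction %d", id);
    mvSetWatermark(label);   // named marker: allocations now group per transaction
    // ... perform the transaction; leaks show up between this watermark and the next
}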
The Memory Validator folks (softwareverify.com) were very courteous and helpful. Their tech support quickly jumped on the two bugs I reported. As of today, these issues have been resolved and I can recommend Memory Validator as a good balance between performance and strictness for use with omniorb.
With regard to finding bugs, we do have a rather mega app here (last count 4 million lines of code) and we have given up on using BoundsChecker due to the slowness. Thus the great thing about Memory Validator (for us) is that we can attach (inject) to an existing running process and, by switching data collection on only when we need it, track down a 2MB memory leak per pen press. So that is a tick.
You are exceeding customer expectations!
I have been using evaluation versions of Memory and Performance validator for the past month. As an experienced user of the Numega products I have to say I’m quite impressed with your programs. The fact that I don’t have to do a special re-compile of my program is a major advantage over the likes of Boundschecker.
We have used Memory Validator extensively and continue to do so. The value addition for us has been tremendous. The sheer simplicity of usage (not depending on “instrumented builds”, the ease of attaching to a service/process) and the crispness and accuracy of output have made the process of debugging so much simpler.
I downloaded Memory Validator on Friday. In 15 minutes it solved problems that IBM’s Purify had been crashing and hanging on for a week. I purchased a license the next day. Your product probably has bugs and problems, but it is amazing compared to Purify. If this new product is as good and it can replace IBM’s Quantify, I am switching from Rational Software to yours without a doubt.
In case you’re interested. I’ve already successfully profiled a small bit of an application… and discovered some as yet undiscovered leaks. This is very positive considering the amount of time I’ve spent messing around with Rational’s Purify with no results. I’ll need to spend a bit of time evaluating the product with the PeopleSoft tech stack but so far I’m impressed. Nice product.
As I mentioned before, I was very pleased with this product and look forward to using it in the future.
I’m not getting much time these days to offer as much feedback as I would like, but I’m making good use of Memory Validator and Coverage Validator. These two tools have become a staple in my toolbox. Still looking forward to a new Thread Validator, nobody else (that I know of) can do a good job of this.
I evaluated MV a few weeks ago after reading an article on CodeProject and I was very happy with the product.
In particular, the default instrumentation for memory leak detection was amazingly fast. I recommended it to our group as a replacement for Purify.
BTW I love this product. It is the best debugging tool I’ve seen or used and the price is great for the level of functionality.
By the way. I like the product, it has helped me find what I was looking for already, so you’ll have a new customer in a day or two.
When we asked Winston for permission to quote the above, he replied…
…and I would add that your update service is outstanding as well. I would recommend your product to anyone that writes code that has to work. The funny thing about your product? The less someone knows about it, the more likely they are to need it.
Keep up the good work. You make the world a better place.
I am rather pleased with the results of my MV tests and with your tech support and have submitted your product to our Technical Director for purchasing.
Great job! Thanks for working so hard on this. I will try out the new version when it is ready. Thanks again.
Thanks, I think it is a great tool.
Thanks for the last two support emails, I appreciate the time you take writing them.
I’m still evaluating and working on an application of mine which had some gross memory leaks.
p.s. great product, putting in a PO for first thing tomorrow.
I have been very pleased with Memory Validator. I hardly ever use Purify anymore.
Now, that IS impressive! Talk about customer service! Your tool has been very impressive thus far, making our previous purchases of DevPartner, HeapAgent and Purify a waste. Our application uses so much memory (250MB at startup, 1.4GB during full load) that the other tools simply couldn’t handle it. The ability to start and stop data collection is key because of our application’s enormous initialization routine.
By the way, Memory Validator is extremely useful to me. It outperforms any other equivalent software I tested (Rational Purify just stops a work unit before the calculation completes, making it unable to report any leaks or memory-usage statistics!!; BoundsChecker is too slow to be used on our library). So your software is an extremely useful addition to my developer toolbox and I would recommend it to any other developer looking to fix memory leaks or improve the memory usage pattern of their software.
I would also like to compliment you on your improvements to Memory Validator. I’ve been able to run programs that produce ~10 million events with no out-of-memory issues, and they run with very little slowdown.
When we asked Tom for permission to quote the above, he replied…
Actually with the latest versions I’ve scaled up quite a bit. I now validate our servers that run for days with billions of memory events and it works like a charm.
This is getting to be one awesome tool. No one should be without it when tracking memory issues! It just keeps getting better and better.
I have written an article for the soon-to-go-public infoq website (www.infoq.com).
In this article, I’ve made honorable mention of RPVL.
If you have a Windows machine around (or a dual boot Intel Mac), I suggest evaluating Ruby Performance Validator (RPVL) by Software Verify Ltd. I have found it to be of immense value for my Rails performance work that went into the core of Rails, especially after SVL implemented a call graph feature http://railsexpress.de/~skaes/callgraph.pdf (link no longer resolves, 2016) that I suggested on top of the already existing hot spot view. As far as I know, it’s the only tool for Ruby application performance analysis on the market right now. Railsbench has built-in support for RPVL, which makes it a snap to run benchmarks defined for railsbench under RPVL.
Your products are excellent in-house tools that I will continue to use frequently.
PV is easy to use (I don’t really want to have separate builds to get profiling figures) and presents the results in a very user-friendly way. Being able to show which functions are taking the most time, or which are called the most, enables me to quickly check if they can be tuned to shave a few % more off an application’s runtime. It’s certainly helped me tune the performance of a CAD application we use internally. Running performance and memory allocation checks with PV and MV gives another view into how your code works and really does make a difference.
I have not looked at the tutorial yet – I sort of jumped right in. Everything seems intuitive enough to get up and running. Yes, I really like this program so far. I like being able to run a debug program externally instead of having to do everything from Visual Studio, and your interface is better than the clunky DevPartner interface.
I had tried this one profiler from AMD a while back that sort of worked like yours, but it didn’t show any useful info 🙁 It gave a nice breakdown of all the function calls, but didn’t give you any useful stats. Anyway, I digress–I really like this program so far.
Performance Validator is gprof on steroids, with much better organized results.
First of all, the product is working well – it’s the first profiler I’ve ever found that can work with our application Zimbra (www.zimbra.com). It’s a BIG app, about 135k lines of JS code, 2.5MB uncompressed.
Indeed. 🙂 By using your product, I was able to cut the start up time by 33% and have some ideas for some other ways to further optimize the largest startup time-sink left (we had not known the culprit previously, and I doubt anyone would’ve guessed that function would even be a problem). The tool has been very helpful.
LOVE the instant callstack feature. Very cool, lots of “wow” factor. I often want to peruse and answer the “what’s it doing right now” question in depth – it’d be nice if there were a “freeze execution NOW” button that would pause all threads, let me examine the stack, and then continue.
I read through the tutorial briefly, but I’m the kind of person who likes to just dive right in. It wasn’t obvious to me whether there was a way for PPV to also tell me about the stack in my custom C++ Python extension modules.
OK, so I got through my first profiling session and was all excited to see the data it collected, so I hit the shiny red button for “Stop Collecting Data”. This left my program running. So next I tried “Stop Application”, which stopped my application correctly. Next, I wanted to look at all my new statistics – but alas, there are none in any of the tabs. What did I do wrong?
I ran the product again, and this time I figured out that I could browse my stats while the program is still running. That’s pretty awesome. I really like the integrated call graph (with source code) browser. Kudos.
A tiny nit: In the “Statistics” tab, some of the column names are too long for the column width. I had become used to the really nice tooltips elsewhere in the program and would have liked them here too.
Ok, those are the notes from “my first session” – seems like a really neat tool. Very slick, stable, and useful!
Y’all have put a lot of work into Performance Validator. I was just looking it over. Wow! It’ll take some time to learn all the features and what all the columns mean so bear with me. Injection took a little while (running bare minimum for XP [a P3 550] – can’t afford P4 yet), but the overall performance of my application was not noticeably hindered.
Great job!
From July 14 to 20 we participated in the RoboCup World Championship in Suzhou, China with our humanoid robots. Thanks to your software we were able to identify and remove some bottlenecks in our application. Due to code and algorithm improvements based on the analysis with Performance Validator we could raise the number of processed frames per second from about 4-5 to 15-17, which makes a huge difference in our highly dynamic environment.
Performance Validator has become an irreplaceable tool for our project!
Thanks for your attention. Using HPjmeter just makes me appreciate Java Performance Validator more!
My colleague and I (a fellow Brit) used your Thread Validator application to solve a suspected deadlock issue. With some playing around we managed to isolate the area in which the error was occurring.
It took us:
4 hrs to answer: “What happens when I do this?”
4 hrs of troubleshooting
2 hrs of examining the suspected code block
As it turned out, the problem wasn’t related to a deadlock as previously thought. It was a producer/consumer issue, whereby the consumer went astray and stopped consuming data.
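A loose illustration of that bug class (not the customer’s code, just an assumed C++ shape of the problem): a consumer that treats a momentarily empty queue as “finished” and returns, so it silently stops draining data while the producer keeps filling the queue – no deadlock, just a stalled pipeline.
#include <mutex>
#include <queue>

std::mutex m;
std::queue<int> work;                    // filled by a producer thread (not shown)

void process(int /*item*/) { /* the real work would happen here */ }

void consumer()
{
    std::unique_lock<std::mutex> lock(m);
    while (true)
    {
        if (work.empty())
            return;                      // BUG: “empty right now” is not “no more data coming”,
                                         // so the consumer goes astray and stops consuming
        int item = work.front();
        work.pop();
        lock.unlock();
        process(item);
        lock.lock();
    }
}
// A correct consumer would block on a condition variable until data arrives or a
// shutdown flag is set, rather than returning on the first empty check.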
Thread Validator, what can I say. When it comes to tracking down concurrency problems it wins hands down.
It’s been a while since I have written you with any input regarding Thread Validator. I thought I’d let you know that I used it just recently to help track down a deadlock that I was experiencing. I was quite impressed with the speed at which the program launches executables. This is MUCH improved over earlier versions of the software. I’m not sure what you did, but thanks for doing it.
Fantastic! I’m very impressed with your responsiveness.
VM Validator was great. Too cool. In 2 minutes you’ve just confirmed our hypothesis about why our software wasn’t working.
We have a problem where we are memory mapping a 650MB flat file. We find that when we open the file, close it, reopen it, close it, and open it a third time, we get an out of memory error. One of our engineers suggested that the problem was to do with fragmentation in the virtual address space, as there was plenty of total VM to make the allocation. Your program confirmed this very, very quickly, as graphically we could see that the first allocation was taking place towards the start of the VM and the second allocation in the middle, with little bits of reserved/committed memory occupying other space. It’s a real pleasure to use something that works so easily and gives such good feedback.
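For anyone hitting the same wall, here is a minimal sketch of the map/close/re-map cycle described above (the file path and the three-iteration loop are assumptions, and error handling is trimmed). Even when every handle is released correctly, a 32-bit process can fail a later MapViewOfFile() call if allocations made in between have fragmented the address space so that no contiguous 650MB region remains – which is exactly the picture VM Validator draws.
#include <windows.h>
#include <cstdio>

static bool mapAndRelease(const char *path)
{
    HANDLE file = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                              OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE)
        return false;

    HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READONLY, 0, 0, NULL);

    // Mapping the whole 650MB file needs that much *contiguous* free address
    // space, so this is the call that fails once the address space is fragmented.
    void *view = mapping ? MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0) : NULL;
    if (view == NULL)
        std::printf("MapViewOfFile failed: %lu\n", GetLastError());

    // Releasing everything returns the range to the OS, but other reserved or
    // committed blocks created in between can split the free space into pieces.
    if (view)    UnmapViewOfFile(view);
    if (mapping) CloseHandle(mapping);
    CloseHandle(file);
    return view != NULL;
}

int main()
{
    for (int i = 0; i < 3; ++i)                  // open, close, reopen, close, open again
        mapAndRelease("C:\\data\\bigfile.dat");  // hypothetical 650MB flat file
    return 0;
}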