Dr. Joel Orr
VP & Chief Visionary
Cyon Research Corporation

Joel.orr@cyonresearch.com
www.joelorr.com


I've been helping engineers buy software and hardware for over 30 years. After experimenting with careful benchmarks and other attempts to quantify and qualify products and vendors, I came to the conclusion that the very best approach to choosing among competing software or hardware systems is simply user interviews.

"Ask the man who owns one" was the famous slogan for Packard Motor Cars, first appearing in 1901. A simple and brilliant insight, it is perhaps even more applicable to engineering automation technology than it was to choosing an automobile.

When CAD seats averaged over $100,000 (and in dollars worth far more than today's), companies would develop benchmark tasks of excruciating length and complexity to determine which system to buy. The thought behind the process was, "I need to perform certain kinds of activities. You, vendor, need to demonstrate to me that your system is capable of doing those things." A reasonable approach, on the surface.

The problem with this approach is that it implicitly encourages the wrong kinds of things:
  • User organizations developed hard problems, to "test the limits" of the systems. But the fact is that such problems rarely, if ever, appeared in their daily work.
  • Systems that can perform really tough tasks, such as surface-surface intersections of odd shapes, may be perfectly miserable to use for mundane work. The converse is also true: systems that offer great productivity in day-to-day activities may not do the "tough stuff" well.
  • Vendors assigned crack "demo jocks" to benchmarks, real virtuosos whose profound understanding and flying fingers would exploit their systems' capabilities and work around their deficiencies. A user organization would be unlikely to have such talented operators.
  • Tough benchmarks were defensible to management within the user organization. Their quantitative aspect assured the CAD selection committee that, even if the system bombed once installed in-house, the committee could not be accused of negligence.
  • The adversarial nature of the benchmark brought out the worst in everyone. Vendors were motivated to present things in a favorable light, even to the point of deception. When hundreds of thousands, and in some cases millions, of dollars were riding on the benchmark's outcome, it is easy to understand why the team at one (now-defunct) CAD company secretly connected several "dumb" workstations to a computer outside the benchmark room, while claiming to run them all from a single 16-bit minicomputer. (The deception came to light when the computer in the demo room crashed but several of the workstations kept working.)
  • A major contributor to successful implementation is the ongoing support of the vendor. Benchmarks did nothing to indicate how good such support might be.

That's why I concluded, early in my consulting career, that user interviews were far more important than benchmarks. You identify current users of systems you are considering, preferably ones who are in your industry, and go and spend time with them. Face-to-face visits are important; people will tell you things over a meal, once a measure of rapport is established, that they would not say over the phone.

I like to have a mix of users: some supplied by the vendor (usually its "poster children") and others I've found on my own. Interestingly, even the "poster children" frequently turn out to be openly critical of the vendor's shortcomings; and it is reasonable to assume that if the hand-picked user references are unhappy, there may be many more unhappy users.

It's nice to have a checklist of questions, but once you get into friendly conversation, a user will generally characterize the relationship with the vendor quite fully on their own. So you needn't worry about "covering all the bases" in your list of questions.

I ask things like:
  • Has the vendor lived up to its promises? If not, why not?
  • Has the project exceeded your budget? If so, why?
  • Do promised updates arrive on time?
  • Do new versions of the software "break" older work?
  • Do frequent system crashes interfere with ongoing work?
  • How much training have you had? Do you need more? How good is it?
  • Do you deal with the same people all the time, or is there a lot of turnover on the vendor staff?
  • How responsive is the vendor in an emergency?
  • Does the vendor take responsibility for its own failings, or do they charge you no matter what?
  • Overall, do you feel like the vendor is "on your team"?

Today's engineering software platforms are all more or less adequate; it's a mature market, and the survivors have basically functional products. Issues that used to have to be resolved by benchmark can generally be settled by simply asking: Does the system do this or that?

But a prospective customer needs to consider that buying a CAD, PDM, PLM, or analysis system involves much more than just the price of the software. There's training, data conversion, and downtime to consider. Even for a small installation, the investment is considerable. A good way to reduce risk is through user interviews.


Dr. Joel Orr can be reached at www.joelorr.com and joel.orr@cyonresearch.com.