There’s been an interesting flurry of social chatter and blog posts over the past week or so about the “quality” of research. Ron Sellers wrote thoughtfully about the impact that active management of “The Field” can have on the quality of the data collected. Reg Baker started a little firestorm among the client-side bloggers by suggesting that clients “need to become smarter consumers”…ostensibly because they are asking for things without fully understanding what they’re asking for. Edward Appleton took issue with this a bit, as did I, but it kicked off a little chat roulette about the relationship between how clients and vendors define quality and how they go about achieving their quality goals.
People are now talking about why clients buy cheap research. Or why there can be such a huge gap between the quality of the sales pitch and the quality of the final report. This is not going to sound particularly inspired, but there are some really simple and obvious reasons for all of the above:
- Quality of work output = quality of materials × quality of craftsmen × quality of processes. If you want high-quality output, you need great data, great analysts, and great processes and templates for getting the work done. Because the factors multiply rather than add, one weak link caps the whole product: great analysts can’t salvage bad data. So when the quality of the sales pitch exceeds the quality of the report, it’s usually not that hard to figure out which factor fell short (see the quick sketch after this list).
- The value of a study is proportional to its business impact. Clients pay more for work that has an incontrovertible impact on their business. If I’m building an early version of a market-sizing model, I’m not going to spend tons of money on super-accurate data; I only need data that’s good enough to help the business make a go/no-go decision on a particular market or category.
- You need great people to make research look great. If we all stop hiring, stop developing the talent funnel, and stop maintaining a strong industry-wide bench of players, we will all suffer from significant gaps…such as the current shortage of mid-level researchers.
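To make the multiplicative point in the first bullet concrete, here’s a toy sketch. The 0-to-1 scoring, the factor names, and the function itself are illustrative assumptions of mine, not a real quality metric:

```python
def output_quality(materials: float, craftsmen: float, processes: float) -> float:
    """Quality of work output = materials * craftsmen * processes.

    Each factor is a hypothetical score on a 0-1 scale.
    """
    return materials * craftsmen * processes

# Great analysts and great processes can't salvage weak data;
# the weakest factor caps the whole product:
print(output_quality(materials=0.5, craftsmen=1.0, processes=1.0))  # 0.5

# And mild weakness in every factor compounds quickly:
print(output_quality(materials=0.8, craftsmen=0.8, processes=0.8))  # 0.512
```

The point of the multiplication, as opposed to averaging, is that you can’t buy your way out of one bad input by overspending on the others.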