Things I Learned as an Analyst that I Wish I’d Known as a CMO
It might have been Geoffrey Moore who compared market research to sausage making – the more you know about it, the less appetizing it is. After working as a research analyst following a 25+ year career as a tech marketing executive, I can see his point. But I’d put some spin on it. Like healthy food, good research nourishes your decisions. But many studies are more like cheap fast food. My advice? Stay skeptical.
As the world’s complexity increases, we can’t become sponges who passively absorb what we encounter. Actively filtering information with good questioning and interpretation skills will increase decision quality. I led International Data Corporation’s (IDC) CMO Advisory practice for nine years. IDC is one of the most data-savvy companies in the market research industry. I am forever grateful for how my fellow analysts and the data teams at IDC took me under their wing when I came in as a business leader.
Here are five things I learned about research that I wish I had known when I ran marketing.
1) Averages hide meaningful variations.
Clients came to me looking for benchmarks. How much do tech companies spend on marketing? How many companies are adopting Practice A? We gave them what they asked for, but I worried when we published the ‘average’ because the average hid so much critical variation. University statistics courses teach about bell curves with their means and distributions. In business, I found averages useful for tracking high-level directional trends. However, because business is complex, data about customers and practices doesn’t always fall into a tidy normal curve. That makes averages riskier to rely on when you get down to individual company decisions. Even within the relatively narrow segment we surveyed (e.g., same function, same industry), the differences between sub-segments were remarkable. Small companies invest differently than large ones. Software companies invest differently than hardware companies. Different geographic regions trended differently. A deep dive into the details could yield insight that changed everything. Detailed custom analysis was far more valuable to my clients than averages.
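To make that concrete, here is a rough sketch in Python with made-up numbers (not IDC data, and hypothetical segment names) showing how a single published benchmark can hide dramatic segment differences:

```python
# Illustrative only: hypothetical spend ratios by segment, not real survey results.
import pandas as pd

survey = pd.DataFrame({
    "segment": ["small software", "large software", "small hardware", "large hardware"],
    "respondents": [120, 45, 80, 30],
    "marketing_spend_pct_of_revenue": [12.0, 6.5, 4.0, 2.5],
})

# The single "benchmark" number a report might publish: a weighted overall average.
overall = (
    survey["marketing_spend_pct_of_revenue"] * survey["respondents"]
).sum() / survey["respondents"].sum()
print(f"Published 'average': {overall:.1f}% of revenue")

# The segment-level view tells a very different story.
print(survey.to_string(index=False))
```

In this toy example the published “average” lands near 8%, a figure that describes almost none of the individual segments.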
2) Examine the demographics.
Before I was an analyst, I used to skip past the boring demographic stats to get to the meaty headlines. Now I look at demographics first. Demographics, which tell you who answered the survey, help you judge how much to trust the results. I recently read an annual survey on agile adoption published by a major research company. Since I had access to last year’s report, I compared the two to look for trends. The respondent demographics of the two surveys turned out to be dramatically different, each distinct and narrow in its own way. Yet each survey promoted itself as a broad industry study and projected trends. Other demographic problems include small sample sizes and biased sourcing. Getting good data is difficult and often expensive. Executives often must make decisions without great data. Balance availability and quality with risk. As one of my wise IDC colleagues used to say, “Don’t try to make the data do more than it can.”
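Sample size is the easiest of these problems to quantify. A rough sketch, using the standard margin-of-error formula for a survey proportion (95% confidence, worst-case p = 0.5) and hypothetical sample sizes, shows how quickly reliability degrades with small samples; it says nothing about biased sourcing, which is harder to measure:

```python
# Back-of-envelope margin of error for a proportion from a simple random sample.
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a surveyed proportion."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (50, 200, 1000):
    print(f"n = {n:>4}: +/- {margin_of_error(n) * 100:.1f} percentage points")
```

A 50-person sample carries a margin of error of roughly 14 points, which is why a “43% of companies do X” headline from a small panel deserves skepticism.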
3) Keep your eyes open for ‘that’s funny!’ data.
The most important phrase in research is not “Eureka, I found it!” but “Hmmm, that’s funny.” If you see weird, unexpected research results, take two actions.
First, don’t accept weird data at face value. Shining a spotlight on startling stats has become the prevailing media practice. As a decision-maker, note anything new, then seek corroboration from independent sources.
Second, don’t automatically dismiss weird data. Analysts sometimes reject weird data as just outliers. Sometimes they are. However, weird data can signal more. Perhaps something went wrong in the survey and needs to be fixed. Weird data can also be the first alert that something new and interesting is happening. In complex systems like markets, interesting things can emerge from the “noise”. When combing through detailed data from a buyer survey, my colleague noticed that the outliers on a question we asked buyers about media didn’t seem random. We filtered the data through a new lens, by generation, and were blown away by the difference that made. Generational media preferences varied remarkably. The generational lens also uncovered trends related to trust. We would have missed these important insights if not for my perceptive colleague.
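Here is a minimal sketch of that “new lens” idea, using made-up responses (not the actual IDC survey): answers that look like scattered outliers in the overall tally can show a clear pattern once you group them.

```python
# Hypothetical buyer-survey responses, invented for illustration.
import pandas as pd

responses = pd.DataFrame({
    "respondent_id": range(1, 9),
    "generation": ["Boomer", "Boomer", "Gen X", "Gen X",
                   "Millennial", "Millennial", "Gen Z", "Gen Z"],
    "preferred_media": ["trade press", "analyst reports", "trade press", "webinars",
                        "podcasts", "peer communities", "short video", "peer communities"],
})

# Ungrouped, the unusual answers look like noise; grouped by generation, a trend appears.
print(responses.groupby("generation")["preferred_media"].value_counts())
```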
4) It matters how you ask a question.
A LinkedIn member recently asked why online polls receive relatively few responses even while getting a lot of comments. His network chimed in clearly – multiple-choice questions often don’t offer answers the audience wants to choose. This is just one of the myriad challenges researchers face in working out how to ask the right questions and how to ask questions right. Of all the things I learned at IDC, how to ask better questions might be the most transformative.
How you ask questions alters the answers you get. At IDC we randomized the order of multiple-choice answers because people tend to pick choices at the top of the list, no matter what they are. Early questions can prime participants to answer later questions in certain ways. We frequently included a choice for “other”, and if many respondents chose “other”, we assumed our choice list needed work. If we were not sure how to ask a question, we would sometimes run a test: get a few responses and see whether the data came back in the form we expected. We would also ask similar questions in different ways within the same survey and look for variations in the answers. This technique yielded several useful insights.
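As a simple illustration of randomizing answer order to blunt position bias, here is a sketch in Python; in practice a survey platform handles this, and the answer options below are invented:

```python
# Shuffle answer options per respondent so no choice benefits from sitting at the top.
import random

CHOICES = ["Brand awareness", "Lead generation", "Customer retention", "Other"]

def present_choices(respondent_seed: int) -> list[str]:
    """Return the options in a random order for one respondent, keeping 'Other' last."""
    rng = random.Random(respondent_seed)
    shuffled = CHOICES[:-1]
    rng.shuffle(shuffled)
    return shuffled + ["Other"]

for respondent in range(3):
    print(respondent, present_choices(respondent))
```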
5) Be humble about your own biases.
All humans have biases. (Read about the six most dangerous biases.) You are also biased. When you create a survey, interpret data, or read a report, keep this in mind. Dialogue with colleagues and clients who had different points of view helped me see more objectively. Treasure the people and research companies that challenge you. I knew a few people whom I dreaded showing work to because they would invariably find the holes that meant more work for me. But I sought them out anyway, and their contributions made my work better. People in power and experts should pay special attention to biases. When we filtered data by role, we found that everyone had a higher opinion of themselves and their contribution than others did of them. The most overconfident were CEOs and those in technical jobs.
As data floods our world, it’s incumbent on us to train ourselves to be more proficient in using it.