This article argues that internet research isn't to be trusted.
The methodologies involved in all forms of marketing research always make me a little bit worried - there's very little hard measurement.
You might run back to the client with a graph showing a 20% uplift in purchase propensity, when all that really means is that, after seeing your ad, three more people out of twenty ticked the 'very likely to' box rather than the 'quite likely to' one.
What does that actually mean?!
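To make the arithmetic concrete, here's a quick sketch with entirely made-up numbers showing how a small box-shift like that can come out as a "20% uplift" when reported relative to the baseline:

```python
# Hypothetical illustration only: how "three more ticks out of twenty"
# can be reported as a 20% uplift. All figures are invented.
before = 15  # respondents (out of 20) ticking 'very likely to' pre-exposure
after = 18   # same box post-exposure: three more people

relative_uplift = (after - before) / before
print(f"{relative_uplift:.0%}")  # prints 20%
```

The same three-person shift could equally be quoted as a 15 percentage-point change, which sounds far less impressive - the headline figure depends entirely on which denominator you pick.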
Marketing research numbers have always been more than slightly dodgy, and a similar problem runs through the whole research community - we only ever hear from a certain type of person. We need to be careful not to end up tailoring all our ad output to the tiny slice of the population willing to fill in a questionnaire.
What could be interesting is if focus groups also picked up participants' cookie IDs from DART / Atlas. That would let us run focus groups knowing exactly which ads each person had been exposed to online.
I'm not sure how easy it would be to get hold of that data, but it would definitely be interesting.
Overall I think the way people see a brand shouldn't be reduced to numbers that can be compared with other brands. Brand recall and unprompted awareness are silly metrics to be using.
I think the way people see different companies and brands varies with the kind of relationship they have with them. Zara and BP, for example, are completely different businesses, and the idea that any single metric could meaningfully be compared between them is a little bit hopeful.