There has been a good deal of handwringing of late in the NewMR social media bubble about the Cambridge Analytica/Facebook imbroglio and its implications for MR. (If you don’t know what I’m talking about do some googling.) The general issue at play is one that the ESOMAR Professional Standards Committee (PSC) has been wrestling with for some time. I work as a consultant to the Committee, but I want to be clear at the outset that I do not speak for the PSC. This post is my own personal view.
To my eye the challenge we face is how to implement the longstanding ethical principle that people should not suffer adverse consequences as a result of having participated in research, as we rely less on primary data collection (surveys) and more on data collected for some other purpose and then used in research (big data). Historically, this principle has been about protecting research participants from unwanted marketing messages, and it has been operationalized by refusing to share the identity of research participants with clients, except under very exceptional circumstances or with the informed consent of the participant. But the world of big data poses a whole new set of challenges, and things no longer are that simple.
The International Chamber of Commerce and ESOMAR wrestled with this issue when they revised their Code in 2016. They added a new article to the Code to cover what they term "secondary data," meaning "data collected for another purpose and subsequently used in research." Article V – Use of Secondary Data reads:
When using secondary data that includes personal data researchers must ensure that:
a. The intended use is compatible with the purpose for which the data was originally collected.
b. The data was not collected in violation of restrictions imposed by law, through deception, or in ways that were not apparent to or reasonably discernible and anticipated by the data subject.
c. The intended use was not specifically excluded in the privacy notice provided at the time of original collection.
d. Any requests from individual data subjects that their data not be used for other purposes are honoured.
e. Use of the data will not result in harm to data subjects and there are measures in place to guard against such harm.
To my eye the three most difficult tests are a, b, and e. Passing them is not straightforward. The term "compatible purpose" has a distinct meaning, and part of that meaning involves weighing the rights of data collectors, data subjects, and society as a whole. Perhaps another post on that at some point. And establishing the precise conditions under which the data was collected can require a whole lot of digging into the provenance of specific data items, which is often unclear.
With respect to e, the Code defines harm as "tangible and material harm (such as physical injury or financial loss), intangible or moral harm (such as damage to reputation or goodwill), or excessive intrusion into private life, including unsolicited personally-targeted marketing messages." The Future of Privacy Forum takes an even broader view and classifies harm into distinct categories that reflect the increasing use of automated decision-making systems. Both recognize that the amount of information available, and the potential for its misuse, means that the bar has been raised when we think about "adverse consequences."
One way to put this in more familiar terms is to think about the difference between segmentation and profiling. The former uses all kinds of data to define groups of people with common needs, interests, and priorities. Profiling (now dressed up as “programmatic behavioral targeting”), on the other hand, amasses data about individual data subjects with the intent to use that data to take direct, tailored action toward them as individuals and for a non-research purpose. Segmentation is research; profiling is not.
The specifics of how all this may have played out in the case of Cambridge Analytica and Facebook are unclear, at least to me. One hopes that the episode leads to some clarity on the part of researchers as to where their ethical responsibilities lie. The fear here is that research ethics have devolved to whatever is legal, and that only the threat of prosecution now restrains us. That, I think, would be a real shame and not a good thing for the long-term health of our industry.