Associations Shaping the Future of AI in Market Research—Opportunities, Ethics, and Regulation

Blog Contributors: Melanie Courtright, CEO at the Insights Association, and Howard Fienberg, Senior VP, Advocacy, at the Insights Association

The Insights Association is a Founding Association Partner of MRII

During recent conversations with leaders at market research agencies and corporate insights departments, the current and future use of AI in our work has been a persistent topic. Some leaders in the space fully believe that AI, and generative tools in particular, will be as important an evolution as the migration to online research. More powerful Generative AI applications can bring welcome time, cost, and quality efficiencies, but they can also enable more sophisticated and harder-to-detect fraud.

As the representative of the world’s leading insights and data analytics market, the Insights Association (IA) is where people turn for answers and assurance. One thing we can tell you unequivocally: helping to navigate this evolution is a top priority for the Insights Association and our Board of Directors. I’m pleased to say that our fellow associations around the world share this same sense of urgency. For the past several months we have been working closely with them, in unprecedented coordination, to ensure a complete and unified approach. Following are some details.

What Associations Are Doing in the AI Area: It’s important to note that many of the concepts that should guide the use of AI in insights already exist in the ethics codes of the leading market research associations: Transparency, Duty of Care, Fit for Purpose, Use of Data, and Privacy are all part of our core code. So we are already bound by those concepts in the work we do, including with AI. That said, AI brings some new considerations and applications. Therefore, your global associations are working on AI-specific guidelines, and some have already been published. The Insights Association published a paper outlining the legal concerns and risks of AI, along with several recommendations for insights companies offering and/or using Generative AI. ESOMAR has a task force on AI, which it has kindly invited IA and other associations to join. The Market Research Society (MRS) in the UK and the Global Research Business Network (GRBN) are coordinating a global standard. You will see more news and developments on this front from all of us very soon.

Where Associations Stand on AI and Synthetic Sample: Associations strongly encourage evolution and innovation in our profession, as it ensures our future. We balance that enthusiasm with a reminder of the risks and considerations, and with encouragement to be thoughtful about them. Specific to AI, and within that synthetic sample, IA has been discussing the 6 Rs to understand and address when using the tools or developing new products:

  1. Reason for Use: What new outcomes are we able to generate using AI, what problems are being solved, and what problems are being created? What are the tradeoffs? 
  2. Risk: Transparency to all stakeholders, plus legal, privacy, IP ownership, data provenance, and regulatory considerations.
  3. Respondent Care: Are participants fully aware of how their data is being used, both now and in the future, and is the experience designed to do them no harm?
  4. Representation: Data fit, bias, and gaps. Who does the underlying data being used in AI represent, and not represent?
  5. Recency: Data age, and fit for predictions and modeling.
  6. Repeatability: Is the output created consistent and reliable for decision-making?

Government Regulation of AI in the Works: IA is working with U.S. state and federal policymakers to ensure that their approach to regulating AI does not strangle our industry’s innovation in the crib. The Federal Trade Commission (FTC), the insights industry’s top U.S. regulator, is certainly taking a tough stance, and Congress is toying with legislative approaches, including a recent pair of dueling Senate proposals:

  • The U.S. AI Act, which would restrict most AI uses, establish a new overarching regulator/enforcer, and punish violations with private lawsuits; and
  • The AIRIA Act, which would regulate the most potentially risky uses of AI and require transparency when providing content produced by generative AI.

More on the Legal Concerns with Generative AI: Insights companies and organizations need to be aware of the potential legal pitfalls that already exist in using this technology, including:

  1. the output of AI tools may not be eligible for copyright/patent protection;
  2. your use of Generative AI may violate copyright and trade secret laws in how it is trained;
  3. you may be sharing your (and your clients’ and data subjects’) proprietary information with these tools without contractual protection from misuse;
  4. you may be failing to minimize bias at the input stage and in your algorithms;
  5. you may be unable to provide your data subjects transparency about how their information interacts with and is handled by the AI tool; and
  6. you risk lawsuits by misrepresenting your tools, what they are, and how they operate, to both your clients/partners and your data subjects.

On behalf of IA and the organizations we partner with globally to ensure the viability and advancement of the insights profession, we encourage you to stay engaged and active. Contact us directly to learn how to get involved.

This information is not intended and should not be construed as or substituted for legal advice. It is provided for informational purposes only. It is advisable to consult with private counsel on the precise scope and interpretation of any laws/regulation/legislation and their impact on your particular business.
