Developing a Framework for Ethical AI in Advertising

IAB Canada has noted increased interest in the implications of AI in advertising and the ripple effect of the GDPR's provisions on automated decision-making (ADM), which have made their way across the pond and influenced the thinking of Canadian policy makers. Here in Canada, we have seen proposed AI provisions in Bill C-11's overhaul of PIPEDA, as well as in Ontario's recent privacy consultation and, more firmly, in Quebec's newly passed Bill 64. Bill 64 dictates that specific notice must be given when a decision is rendered exclusively on the basis of ADM, that individuals be given the opportunity to request more information about how the decision was made and the factors and parameters that led to it, and that they have the right to have the personal information used to make the decision corrected. With proposed new regulations like these emerging at a rapid pace, it is critical for our industry to better understand both the impact of AI on citizens and the best practices to follow to ensure its responsible use.

This week, IAB released a ground-breaking guide covering the subject of bias in the context of AI for marketing. The guide points out that bias is generally introduced into AI systems unintentionally by humans, and that mitigating these risks can help companies do the right thing for their businesses and for society.

The guide provides an excellent starting point for companies developing frameworks for better AI solutions and should be considered mandatory reading for our industry. All stakeholders in our sector can benefit from a strong understanding of the potential implications, as well as of viable frameworks that can be implemented across the advertising value chain. IAB has done an exceptional job curating the real-world experience of AI professionals to define key terminology and explore the roles and responsibilities of stakeholders: requestors, builders, end-users, compliance and legal teams, and consumers. Across the four phases of awareness, exploration, development, and activation, the document explores the role of key stakeholders and their associated responsibilities as AI champions and arbiters of bias.

Some standout truths featured and discussed in the guide include:  

  1. To err is human. To err in a system is a choice to not audit.  
  2. AI is not inherently biased.  
  3. Unwanted outcomes can cascade.
  4. Knowledge is accepting.  

In the US, the FTC recently provided updated guidance regarding its expectations for organizations using AI, and it has indicated that, much like other jurisdictions including our own here in Canada, AI fairness will be one of its regulatory enforcement priorities this year and beyond. The guidance highlights the following practices:  

  • Start with the right data sets: Validate, revalidate and check for gaps. Ask questions before you start.  
  • Beware of discriminatory outcomes: Test your results (see the sketch after this list).  
  • Protect your algorithm from unauthorized use.  
  • Embrace transparency and independent review: Tell consumers how their data is being used in your algorithm and conduct independent audits.  
  • Tell the truth about the data you use and the algorithm results: If the algorithm is used to deny consumers something of value, explain why and how the result was reached.  
  • Do more good than harm: Ask if your AI model meets this standard.   
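
To make the "test your results" point concrete, below is a minimal, illustrative sketch of one way a team might check model outcomes for disparate impact across groups. The sample data, group labels, and the four-fifths rule-of-thumb threshold are assumptions for illustration only; they are not drawn from the IAB guide or the FTC guidance, and any real audit would need to reflect your own data and legal context.

```python
from collections import defaultdict

# Hypothetical audit records: (group label, model decision) pairs.
# In practice these would come from logged model outputs.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def positive_rates(records):
    """Compute the share of favourable (1) decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

rates = positive_rates(decisions)
best = max(rates.values())

# A common rule of thumb (the "four-fifths rule") flags a group whose
# favourable-outcome rate falls below 80% of the highest group's rate.
for group, rate in rates.items():
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, ratio vs. best={ratio:.2f} -> {flag}")
```

A spot check like this is only a starting point; as the guide's four phases suggest, responsibility for catching bias spans exploration, development, and activation, not just a review of final outputs.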

IAB Canada strongly recommends that the guide be studied and that members participate in the discussions we will be holding in the coming months on integrating this useful content into our strategies here in Canada. If you would like to join the conversation, please reach out to committees@iabcanada.com