Who: The Advertising Standards Authority (ASA)
Where: United Kingdom
When: 31 August 2023
Law stated as at: 14 September 2023
The ASA has published a statement on its regulation of the use of generative artificial intelligence (AI), highlighting how advertisers using these tools could potentially fall foul of the UK Code of Non-broadcast Advertising and Direct & Promotional Marketing (CAP Code).
The statement does not suggest the ASA plans to introduce any rules specifically regulating the use of generative AI itself. It will continue to regulate ads on a media-neutral basis, based on how consumers will interpret them rather than on how they were made. However, the ASA has highlighted instances where the way an ad was created could be relevant to its assessment of whether the ad complies with the CAP Code:
1) Where an ad uses AI-generated images to make efficacy claims.
The ASA highlights that the use of AI-generated images could potentially be misleading if such images do not reflect the actual efficacy of the advertised product. This is similar to how images edited with Photoshop or social media filters may be misleading, for example in ads for beauty or cosmetic products.
2) Where AI models amplify biases leading to socially irresponsible ads.
AI models can amplify biases already present in their training data, which the ASA believes could lead to socially irresponsible ads. It notes that there have been examples of generative AI tools portraying bias, for example idealised body standards or gender-stereotypical roles.

Overall, the ASA reminds advertisers that they bear primary responsibility for their own ads, regardless of whether an ad is generated or distributed entirely using automated methods. Although the ASA notes it is not aware of having ruled on an ad featuring AI-generated images to date, earlier this year it upheld a complaint against Stripe & Stare Ltd over an ad that was produced and distributed by an AI-powered tool combining text and images provided by the advertiser.
Why this matters:
Advertisers should carefully consider the risks and implications of using generative AI models to create content, including imagery, for their ads. While generative AI models can provide valuable assistance to advertisers, their use needs to be undertaken carefully and responsibly to reduce the regulatory risks.
The nature and extent of the risks posed to a business will turn on the specific generative AI tool, the datasets that it has been trained on, the anticipated use case for that system, and the way in which the business plans to operationalise the generative AI tool (for instance, the controls that the business will put in place).
At a very general level, in addition to understanding the risks and implications arising from generative AI, advertisers should ensure that there is a "human in the loop" when using generative AI tools. This means taking care to thoroughly check all generated imagery before incorporating it into external-facing materials.
The use of generative AI models is a complex and nuanced area of law, and the regulatory position is evolving at pace, including in areas directly relevant to advertisers: the Competition and Markets Authority is investigating whether the existing regulatory and enforcement framework is fit for purpose, and has recently proposed a set of guiding principles for AI developers to consider. Advertisers should expect the market position, if not the regulatory position, to evolve rapidly, alongside the technology itself.