Who: The Advertising Standards Authority (ASA)
Where: United Kingdom
When: 9 November 2023
Law stated as at: 18 January 2024
What happened:
The ASA is deploying artificial intelligence (AI) to tackle the widespread unlawful advertising of prescription-only medicines on social media platforms. Its system uses machine learning, specifically large language models (LLMs) fine-tuned on the ASA’s own data, to identify advertising that unlawfully promotes prescription-only medicines to the public.
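The ASA has not published technical details of its system. Purely as an illustration of the general approach described above – fine-tuning a pretrained language model on labelled examples so that posts likely to promote prescription-only medicines are flagged for expert review – the sketch below shows one common pattern; the base model, labels and example posts are hypothetical placeholders, not the ASA’s actual data or implementation.

```python
# Illustrative sketch only: not the ASA's published implementation.
# Pattern: fine-tune a pretrained language model as a binary classifier
# that flags social media posts for human review.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)
from datasets import Dataset

MODEL_NAME = "distilbert-base-uncased"  # placeholder base model

# Hypothetical labelled examples (0 = compliant, 1 = flag for review)
examples = Dataset.from_dict({
    "text": [
        "Book your vitamin B12 injection today!",
        "New season sale on running shoes",
    ],
    "label": [1, 0],
})

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

tokenized = examples.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ad-screening-model", num_train_epochs=1),
    train_dataset=tokenized,
)
trainer.train()

# At inference time, posts scoring above a threshold would be queued
# for expert review rather than actioned automatically.
```

In this pattern the model only prioritises content for human assessment; the decision on whether a post breaches the rules remains with the regulator’s specialists.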
The ASA reported that it “frequently deals” with businesses promoting prescription-only treatments such as Botox, weight-loss injections and vitamin injections, especially on social media. While the ASA notes that it takes a “holistic approach” to dealing with such advertising – working with advertisers and platforms, as well as proactively monitoring social media – the large volume of posts has historically made enforcement challenging.
The use of LLM technology to identify potentially unlawful advertising follows a pattern of technology adoption by the ASA: in 2020, the ASA implemented social media “listening tools” to search social media for posts matching specific criteria, which were then evaluated by the ASA’s experts. However, the need for manual evaluation by an ASA specialist limited how far the approach could scale, while applying a stricter filter to reduce that workload risked overlooking many problematic posts.
The ASA is already reporting efficiency gains – stating that the use of AI-powered monitoring tools has enabled its experts to review content two to three times faster than the previous system.
Why this matters:
The ASA’s initiative is part of its wider “active ad monitoring” system, which applies AI to monitor online advertising at scale. The initiative illustrates the significant impact and efficiency gains that AI technology can deliver for enforcement bodies like the ASA. However, with increased efficiency comes the potential for error: because of the nature of LLMs, there is an inherent risk that generated results will be inaccurate or even entirely invented. Such hallucinations are highly problematic in the context of an enforcement body with a mandate to protect consumers from harm. It is therefore crucial for any public function that relies on AI-powered technology to implement measures that mitigate the potential for inaccuracies and foster public trust in the system.