Who: Advertising Association
Where: United Kingdom
When: 5 February 2026
Law stated as at: 6 March 2026
What happened
The Advertising Association (AA) has published a new voluntary “Best Practice Guide for the Responsible Use of Generative AI in Advertising”, developed under the auspices of the UK government and industry-led Online Advertising Taskforce. It builds on and operationalises the Institute of Practitioners in Advertising (IPA) and Incorporated Society of British Advertisers (ISBA) principles for ethical AI use in advertising published in 2023.
The guide considers UK legal and regulatory frameworks (including the UK GDPR, the Data Protection Act 2018, the Equality Act and the Digital Markets, Competition and Consumers Act 2024), as well as the advertising codes and guidance from the Information Commissioner’s Office and the Advertising Standards Authority, giving a practical interpretation of their application in the generative artificial intelligence (AI) context. The AA says that while the guide has been developed for the UK market, its principles are sufficiently flexible to accommodate international interpretations and applications.
The guide is focused on eight core principles:
- Transparency: disclosure of AI-generated or AI-altered advertising content should be determined using a risk-based approach that prioritises the prevention of consumer harm.
- Responsible use of data: personal data used for generative AI applications, including model training, algorithmic targeting and personalisation, should comply with data protection law and respect individuals’ privacy rights.
- Preventing bias and ensuring fairness: generative AI systems should be designed, deployed and monitored to prevent discrimination against, and ensure fair treatment of, all individuals and groups.
- Human oversight and accountability: AI-generated advertising content should be subject to appropriate human oversight before publication, with the level of oversight proportionate to the potential for consumer harm.
- Promoting societal wellbeing: generative AI should not be used to create, distribute or amplify harmful, misleading or exploitative advertising content. Where possible, AI should be deployed to enhance consumer protection and advertising standards.
- Brand safety: organisations should assess and mitigate safety and suitability risks arising from the use of AI-generated content and AI-driven ad placement to ensure that generative AI systems align with brand values and safety standards.
- Environmental considerations: when selecting generative AI tools and approaches, organisations should consider the environmental implications alongside their business objectives and favour energy-efficient options where practical.
- Monitoring and evaluation: generative AI systems, once deployed, should be subject to continuous monitoring to detect performance degradation, bias drift, compliance failures or other issues that may require intervention.
Why this matters
The AA guide provides a regulator-aligned benchmark for what “responsible” use of generative AI in advertising should look like in the UK, and businesses can use it to inform their own internal policies and guardrails.