Stephen Almond, ICO: Prioritise privacy when adopting generative AI

The Information Commissioner’s Office (ICO) is urging businesses to prioritise privacy considerations when adopting generative AI technology.

According to new research, generative AI has the potential to become a £1 trillion market within the next ten years, offering significant benefits to both businesses and society. However, the ICO emphasises the need for organisations to be aware of the associated privacy risks.

Stephen Almond, the Executive Director of Regulatory Risk at the ICO, highlighted the importance of recognising the opportunities presented by generative AI while also understanding the potential risks.

“Businesses are right to see the opportunity that generative AI offers, whether to create better services for customers or to cut the costs of their services. But they must not be blind to the privacy risks,” says Almond.

“Spend time at the outset to understand how AI is using personal information, mitigate any risks you become aware of, and then roll out your AI approach with confidence that it won’t upset customers or regulators.”

Generative AI models are trained on large volumes of data scraped from publicly accessible sources, which can include personal information. Existing laws already safeguard individuals’ rights, including privacy, and these regulations extend to emerging technologies such as generative AI.

In April, the ICO outlined eight key questions that organisations using or developing generative AI that processes personal data should be asking themselves. The regulatory body is committed to taking action against organisations that fail to comply with data protection laws.

Almond reaffirms the ICO’s stance: the regulator will check whether businesses have effectively addressed privacy risks before deploying generative AI, and will take action where poor use of personal data risks causing harm. He emphasises that businesses must not overlook risks to individuals’ rights and freedoms during rollout.

“We will be checking whether businesses have tackled privacy risks before introducing generative AI – and taking action where there is a risk of harm to people through poor use of their data. There can be no excuse for ignoring risks to people’s rights and freedoms before rollout,” explains Almond.

“Businesses need to show us how they’ve addressed the risks that occur in their context – even if the underlying technology is the same. An AI-backed chat function helping customers at a cinema raises different questions compared with one for a sexual health clinic, for instance.”

The ICO is committed to supporting UK businesses in their development and adoption of new technologies that prioritise privacy.

The recently updated Guidance on AI and Data Protection serves as a comprehensive resource for developers and users of generative AI, providing a roadmap for data protection compliance. Additionally, the ICO offers a risk toolkit to assist organisations in identifying and mitigating data protection risks associated with generative AI.

For innovators facing novel data protection challenges, the ICO provides advice through its Regulatory Sandbox and Innovation Advice service. To enhance their support, the ICO is piloting a Multi-Agency Advice Service in collaboration with the Digital Regulation Cooperation Forum, aiming to provide comprehensive guidance from multiple regulatory bodies to digital innovators.

While generative AI offers tremendous opportunities for businesses, the ICO emphasises the need to address privacy risks before widespread adoption. By understanding the implications, mitigating risks, and complying with data protection laws, organisations can ensure the responsible and ethical implementation of generative AI technologies.

(Image Credit: ICO)



