AI pioneers turn whistleblowers and demand safeguards

OpenAI is facing a wave of internal strife and external criticism over its practices and the potential risks posed by its technology. 

In May, several high-profile employees departed the company, including Jan Leike, the former head of OpenAI's "superalignment" efforts to ensure advanced AI systems remain aligned with human values. Leike's exit came shortly after OpenAI unveiled its new flagship GPT-4o model, which it touted as "magical" at its Spring Update...

X now permits AI-generated adult content

Social media network X has updated its rules to formally permit users to share consensually produced AI-generated NSFW content, provided it is clearly labelled. This change aligns with previous experiments under Elon Musk's leadership, which involved hosting adult content within specific communities.

"We believe that users should be able to create, distribute, and consume material related to sexual themes as long as it is consensually produced and distributed. Sexual expression,...

OpenAI takes steps to boost AI-generated content transparency

OpenAI is joining the Coalition for Content Provenance and Authenticity (C2PA) steering committee and will integrate the open standard's metadata into its generative AI models to increase transparency around generated content.

The C2PA standard allows digital content to be certified with metadata proving its origins, whether created entirely by AI, edited using AI tools, or captured traditionally. OpenAI has already started adding C2PA metadata to images from its latest DALL-E 3...

OpenAI faces complaint over fictional outputs

European data protection advocacy group noyb has filed a complaint against OpenAI over the company's inability to correct inaccurate information generated by ChatGPT. The group alleges that OpenAI's failure to ensure the accuracy of personal data processed by the service violates the European Union's General Data Protection Regulation (GDPR).

"Making up false information is quite problematic in itself. But when it comes to false information about individuals, there can be...

UK and US sign pact to develop AI safety tests

The UK and US have signed a landmark agreement to collaborate on developing rigorous testing for advanced AI systems, representing a major step forward in ensuring their safe deployment.

The Memorandum of Understanding – signed Monday by UK Technology Secretary Michelle Donelan and US Commerce Secretary Gina Raimondo – establishes a partnership to align the scientific approaches of both countries and rapidly iterate robust evaluation methods for cutting-edge AI models,...

IPPR: 8M UK careers at risk of ‘job apocalypse’ from AI

A report by the Institute for Public Policy Research (IPPR) sheds light on the potential impact of AI on the UK job market. The study warns of an imminent 'job apocalypse' that could engulf over eight million careers across the nation unless the government intervenes swiftly.

The report identifies two key stages of generative AI adoption. The first wave, which is already underway, exposes 11 percent of tasks performed by UK workers to automation. Routine cognitive tasks like...

UN passes first global AI resolution

The UN General Assembly has adopted a landmark resolution on AI, aiming to promote the safe and ethical development of AI technologies worldwide.

The resolution, co-sponsored by over 120 countries, was adopted unanimously by all 193 UN member states on 21 March. This marks the first time the UN has established global standards and guidelines for AI.

The eight-page resolution calls for the development of "safe, secure, and trustworthy" AI systems that respect human rights...

UK Home Secretary sounds alarm over deepfakes ahead of elections

Criminals and hostile state actors could hijack Britain's democratic process by deploying AI-generated "deepfakes" to mislead voters, UK Home Secretary James Cleverly cautioned in remarks ahead of meetings with major tech companies. 

Speaking to The Times, Cleverly emphasised the rapid advancement of AI technology and its potential to undermine elections not just in the UK but globally. He warned that malign actors working on behalf of nations like Russia and Iran could generate...

Google pledges to fix Gemini’s inaccurate and biased image generation

Google's Gemini model has come under fire for producing historically inaccurate and racially skewed images, reigniting concerns about bias in AI systems.

The controversy arose as users on social media platforms flooded feeds with examples of Gemini generating pictures depicting racially diverse Nazis, black medieval English kings, and other improbable scenarios.

UK announces over £100M to support ‘agile’ AI regulation

The UK government has announced over £100 million in new funding to support an "agile" approach to AI regulation. This includes £10 million to prepare and upskill regulators to address the risks and opportunities of AI across sectors like telecoms, healthcare, and education. 

The investment comes at a vital time, as research from Thoughtworks shows that 91% of British people believe government regulation must do more to hold businesses accountable for their AI systems. The...