Society Archives - AI News

AI pioneers turn whistleblowers and demand safeguards (6 June 2024)

OpenAI is facing a wave of internal strife and external criticism over its practices and the potential risks posed by its technology. 

In May, several high-profile employees departed from the company, including Jan Leike, the former head of OpenAI’s “superalignment” efforts to ensure advanced AI systems remain aligned with human values. Leike’s exit came shortly after OpenAI unveiled its new flagship GPT-4o model, which it touted as “magical” at its Spring Update event.

According to reports, Leike’s departure was driven by constant disagreements over security measures, monitoring practices, and the prioritisation of flashy product releases over safety considerations.

Leike’s exit has opened a Pandora’s box for the AI firm. Former OpenAI board members have levelled allegations of psychological abuse against CEO Sam Altman and the company’s leadership.

The growing internal turmoil at OpenAI coincides with mounting external concerns about the potential risks posed by generative AI technology like the company’s own language models. Critics have warned about the imminent existential threat of advanced AI surpassing human capabilities, as well as more immediate risks like job displacement and the weaponisation of AI for misinformation and manipulation campaigns.

In response, a group of current and former employees from OpenAI, Anthropic, DeepMind, and other leading AI companies have penned an open letter addressing these risks.

“We are current and former employees at frontier AI companies, and we believe in the potential of AI technology to deliver unprecedented benefits to humanity. We also understand the serious risks posed by these technologies,” the letter states.

“These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction. AI companies themselves have acknowledged these risks, as have governments across the world, and other AI experts.”

The letter, which has been signed by 13 employees and endorsed by AI pioneers Yoshua Bengio and Geoffrey Hinton, outlines four core demands aimed at protecting whistleblowers and fostering greater transparency and accountability around AI development:

  1. That companies will not enforce non-disparagement clauses or retaliate against employees for raising risk-related concerns.
  2. That companies will facilitate a verifiably anonymous process for employees to raise concerns to boards, regulators, and independent experts.
  3. That companies will support a culture of open criticism and allow employees to publicly share risk-related concerns, with appropriate protection of trade secrets.
  4. That companies will not retaliate against employees who share confidential risk-related information after other processes have failed.

“They and others have bought into the ‘move fast and break things’ approach and that is the opposite of what is needed for technology this powerful and this poorly understood,” said Daniel Kokotajlo, a former OpenAI employee who left due to concerns over the company’s values and lack of responsibility.

The demands come amid reports that OpenAI has required departing employees to sign non-disclosure agreements barring them from criticising the company, at the risk of losing their vested equity. OpenAI CEO Sam Altman admitted being “embarrassed” by the situation but claimed the company had never actually clawed back anyone’s vested equity.

As the AI revolution charges forward, the internal strife and whistleblower demands at OpenAI underscore the growing pains and unresolved ethical quandaries surrounding the technology.

See also: OpenAI disrupts five covert influence operations

X now permits AI-generated adult content (3 June 2024)

Social media network X has updated its rules to formally permit users to share consensually produced, AI-generated NSFW content, provided it is clearly labelled. The change formalises earlier experiments under Elon Musk’s leadership, which involved hosting adult content within specific communities.

“We believe that users should be able to create, distribute, and consume material related to sexual themes as long as it is consensually produced and distributed. Sexual expression, visual or written, can be a legitimate form of artistic expression,” X’s updated ‘adult content’ policy states.

The policy further elaborates: “We believe in the autonomy of adults to engage with and create content that reflects their own beliefs, desires, and experiences, including those related to sexuality. We balance this freedom by restricting exposure to adult content for children or adult users who choose not to see it.”

Users can mark their posts as containing sensitive media, ensuring that such content is restricted from users under 18 or those who haven’t provided their birth dates.

X’s rules on violent content follow similar guidelines, but the platform maintains a strict stance against excessively gory content and depictions of sexual violence. Explicit threats and content inciting or glorifying violence remain prohibited.

X’s decision to allow graphic content is aimed at enabling users to participate in discussions about current events, including sharing relevant images and videos. 

Although X has never outright banned porn, these new clauses could pave the way for developing services centred around adult content, potentially creating a competitor to services like OnlyFans and enhancing its revenue streams. This would further Musk’s vision of X becoming an “everything app,” similar to China’s WeChat.

A 2022 Reuters report, citing internal company documents, indicated that approximately 13% of posts on the platform contained adult content. This percentage has likely increased, especially with the proliferation of porn bots on X.

See also: Elon Musk’s xAI secures $6B to challenge OpenAI in AI race

OpenAI takes steps to boost AI-generated content transparency (8 May 2024)

OpenAI is joining the Coalition for Content Provenance and Authenticity (C2PA) steering committee and will integrate the open standard’s metadata into its generative AI models to increase transparency around generated content.

The C2PA standard allows digital content to be certified with metadata proving its origins, whether created entirely by AI, edited using AI tools, or captured traditionally. OpenAI has already started adding C2PA metadata to images generated by its latest DALL-E 3 model in ChatGPT and the OpenAI API. The metadata will also be integrated into Sora, OpenAI’s upcoming video generation model, when it launches more broadly.
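
For readers who want to inspect this metadata themselves, here is a minimal sketch of a heuristic check. Per the C2PA specification, manifests in JPEG files are carried in APP11 (0xFFEB) marker segments as JUMBF boxes, so the code below simply walks the JPEG marker stream looking for that signature. The filename is a placeholder, and this is not a validator: real verification (signature and manifest checks) should be done with dedicated tooling such as the open-source c2patool.

```python
import struct

def has_c2pa_manifest(path: str) -> bool:
    """Heuristically check a JPEG for an embedded C2PA manifest.

    C2PA stores its manifest in APP11 (0xFFEB) marker segments as
    JUMBF boxes, so we walk the JPEG segment list looking for one.
    """
    with open(path, "rb") as f:
        data = f.read()
    if not data.startswith(b"\xff\xd8"):  # missing SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break  # lost sync with the marker stream; stop scanning
        marker = data[i + 1]
        if marker in (0xD9, 0xDA):  # EOI or start-of-scan: metadata is over
            break
        if 0xD0 <= marker <= 0xD7 or marker == 0x01:
            i += 2  # standalone markers carry no length field
            continue
        (seg_len,) = struct.unpack(">H", data[i + 2:i + 4])
        payload = data[i + 4:i + 2 + seg_len]
        if marker == 0xEB and b"jumb" in payload:  # APP11 + JUMBF superbox
            return True
        i += 2 + seg_len
    return False

print(has_c2pa_manifest("dalle3_output.jpg"))  # hypothetical filename
```

As the OpenAI quote below notes, the absence of a manifest proves little, since metadata is easily stripped; the presence of a valid one is the useful signal.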

“People can still create deceptive content without this information (or can remove it), but they cannot easily fake or alter this information, making it an important resource to build trust,” OpenAI explained.

The move comes amid growing concerns about the potential for AI-generated content to mislead voters ahead of major elections in the US, UK, and other countries this year. Authenticating AI-created media could help combat deepfakes and other manipulated content aimed at disinformation campaigns.

While technical measures help, OpenAI acknowledges that enabling content authenticity in practice requires collective action from platforms, creators, and content handlers to retain metadata for end consumers.

In addition to C2PA integration, OpenAI is developing new provenance methods, including tamper-resistant watermarking for audio and detection classifiers that identify AI-generated visuals.

OpenAI has opened applications for access to its DALL-E 3 image detection classifier through its Researcher Access Program. The tool predicts the likelihood an image originated from one of OpenAI’s models.

“Our goal is to enable independent research that assesses the classifier’s effectiveness, analyses its real-world application, surfaces relevant considerations for such use, and explores the characteristics of AI-generated content,” the company said.

Internal testing shows high accuracy in distinguishing non-AI images from DALL-E 3 visuals, with around 98% of DALL-E images correctly identified and less than 0.5% of non-AI images incorrectly flagged. However, the classifier struggles more to differentiate between images produced by DALL-E 3 and those from other generative AI models.
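
Those two rates are only part of the picture: how far a positive flag can be trusted also depends on how common AI-generated images are in the stream being checked. The sketch below is an illustration using Bayes’ rule; the prevalence values are assumptions for the example, not OpenAI figures.

```python
def flag_precision(tpr: float, fpr: float, prevalence: float) -> float:
    """Probability that a flagged image really is AI-generated (Bayes' rule)."""
    true_positives = tpr * prevalence
    false_positives = fpr * (1.0 - prevalence)
    return true_positives / (true_positives + false_positives)

# Reported rates: ~98% of DALL-E 3 images flagged, ~0.5% of real images misflagged.
for prevalence in (0.5, 0.1, 0.01):  # assumed share of DALL-E 3 images in the stream
    p = flag_precision(tpr=0.98, fpr=0.005, prevalence=prevalence)
    print(f"prevalence {prevalence:>4.0%}: {p:.1%} of flags are correct")
```

At 1% prevalence, roughly a third of flags would be false alarms despite the strong headline accuracy, which is one reason classifier output is best treated as evidence rather than a verdict.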

OpenAI has also incorporated watermarking into its Voice Engine custom voice model, currently in limited preview.

The company believes increased adoption of provenance standards will lead to metadata accompanying content through its full lifecycle to fill “a crucial gap in digital content authenticity practices.”

OpenAI is joining Microsoft to launch a $2 million societal resilience fund to support AI education and understanding, including through AARP, International IDEA, and the Partnership on AI.

“While technical solutions like the above give us active tools for our defences, effectively enabling content authenticity in practice will require collective action,” OpenAI states.

“Our efforts around provenance are just one part of a broader industry effort – many of our peer research labs and generative AI companies are also advancing research in this area. We commend these endeavours—the industry must collaborate and share insights to enhance our understanding and continue to promote transparency online.”

(Photo by Marc Sendra Martorell)

See also: Chuck Ros, SoftServe: Delivering transformative AI solutions responsibly

OpenAI faces complaint over fictional outputs (29 April 2024)

European data protection advocacy group noyb has filed a complaint against OpenAI over the company’s inability to correct inaccurate information generated by ChatGPT. The group alleges that OpenAI’s failure to ensure the accuracy of personal data processed by the service violates the General Data Protection Regulation (GDPR) in the European Union.

“Making up false information is quite problematic in itself. But when it comes to false information about individuals, there can be serious consequences,” said Maartje de Graaf, Data Protection Lawyer at noyb. 

“It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.”

The GDPR requires that personal data be accurate, and individuals have the right to rectification if data is inaccurate, as well as the right to access information about the data processed and its sources. However, OpenAI has openly admitted that it cannot correct incorrect information generated by ChatGPT or disclose the sources of the data used to train the model.

“Factual accuracy in large language models remains an area of active research,” OpenAI has argued.

The advocacy group highlights a New York Times report that found chatbots like ChatGPT “invent information at least 3 percent of the time – and as high as 27 percent.” In the complaint against OpenAI, noyb cites an example where ChatGPT repeatedly provided an incorrect date of birth for the complainant, a public figure, despite requests for rectification.

“Despite the fact that the complainant’s date of birth provided by ChatGPT is incorrect, OpenAI refused his request to rectify or erase the data, arguing that it wasn’t possible to correct data,” noyb stated.

OpenAI claimed it could filter or block data on certain prompts, such as the complainant’s name, but not without blocking all information about the individual. The company also failed to adequately respond to the complainant’s access request, which the GDPR requires companies to fulfil.

“The obligation to comply with access requests applies to all companies. It is clearly possible to keep records of training data that was used to at least have an idea about the sources of information,” said de Graaf. “It seems that with each ‘innovation,’ another group of companies thinks that its products don’t have to comply with the law.”

European privacy watchdogs have already scrutinised ChatGPT’s inaccuracies, with the Italian Data Protection Authority imposing a temporary restriction on OpenAI’s data processing in March 2023 and the European Data Protection Board establishing a task force on ChatGPT.

In its complaint, noyb is asking the Austrian Data Protection Authority to investigate OpenAI’s data processing and measures to ensure the accuracy of personal data processed by its large language models. The advocacy group also requests that the authority order OpenAI to comply with the complainant’s access request, bring its processing in line with the GDPR, and impose a fine to ensure future compliance.

You can read the full complaint here (PDF).

(Photo by Eleonora Francesca Grotto)

See also: Igor Jablokov, Pryon: Building a responsible AI future

UK and US sign pact to develop AI safety tests (2 April 2024)

The UK and US have signed a landmark agreement to collaborate on developing rigorous testing for advanced AI systems, representing a major step forward in ensuring their safe deployment.

The Memorandum of Understanding – signed Monday by UK Technology Secretary Michelle Donelan and US Commerce Secretary Gina Raimondo – establishes a partnership to align the scientific approaches of both countries in rapidly iterating robust evaluation methods for cutting-edge AI models, systems, and agents.

Under the deal, the UK’s new AI Safety Institute and its upcoming US counterpart will exchange research expertise with the aim of mitigating AI risks, including how to independently evaluate private AI models from companies such as OpenAI. The partnership is modelled on the security collaboration between GCHQ and the National Security Agency.

“This agreement represents a landmark moment, as the UK and the United States deepen our enduring special relationship to address the defining technology challenge of our generation,” stated Donelan. “Only by working together can we address the technology’s risks head on and harness its enormous potential to help us all live easier and healthier lives.”

The partnership follows through on commitments made at the AI Safety Summit hosted in the UK last November. The institutes plan to build a common approach to AI safety testing and share capabilities to tackle risks effectively. They intend to conduct at least one joint public testing exercise on an openly accessible AI model and explore personnel exchanges.

Raimondo emphasised the significance of the collaboration, stating: “AI is the defining technology of our generation. This partnership is going to accelerate both of our Institutes’ work across the full spectrum of risks, whether to our national security or to our broader society.”

Both governments recognise AI’s rapid development and the urgent need for a shared global approach to safety that can keep pace with emerging risks. The partnership takes effect immediately, allowing seamless cooperation between the organisations.

“By working together, we are furthering the long-lasting special relationship between the US and UK and laying the groundwork to ensure that we’re keeping AI safe both now and in the future,” added Raimondo.

In addition to joint testing and capability sharing, the UK and US will exchange vital information about AI model capabilities, risks, and fundamental technical research. This aims to underpin a common scientific foundation for AI safety testing that can be adopted by researchers worldwide.

Despite the focus on risk, Donelan insisted the UK has no plans to regulate AI more broadly in the short term. In contrast, President Joe Biden has taken a stricter position on AI models that threaten national security, and the EU has adopted tougher rules through its AI Act.

Industry experts welcomed the collaboration as essential for promoting trust and safety in AI development and adoption across sectors like marketing, finance, and customer service.

“Ensuring AI’s development and use are governed by trust and safety is paramount,” said Ramprakash Ramamoorthy of Zoho. “Taking safeguards to protect training data mitigates risks and bolsters confidence among those deploying AI solutions.”

Dr Henry Balani of Encompass added: “Mitigating the risks of AI, through this collaboration agreement with the US, is a key step towards mitigating risks of financial crime, fostering collaboration, and supporting innovation in a crucial, advancing area of technology.”

(Photo by Art Lasovsky)

See also: IPPR: 8M UK careers at risk of ‘job apocalypse’ from AI

IPPR: 8M UK careers at risk of ‘job apocalypse’ from AI (27 March 2024)

A report by the Institute for Public Policy Research (IPPR) sheds light on the potential impact of AI on the UK job market. The study warns of an imminent ‘job apocalypse’, threatening to engulf over eight million careers across the nation, unless swift government intervention is enacted.

The report identifies two key stages of generative AI adoption. The first wave, which is already underway, exposes 11 percent of tasks performed by UK workers to automation. Routine cognitive tasks like database management and organisational tasks like scheduling are most at risk.

However, in a potential second wave, AI could handle a staggering 59 percent of tasks—impacting higher-earning jobs and non-routine cognitive work like creating databases.

Bhargav Srinivasa Desikan, Senior Research Fellow at IPPR, said: “We could see jobs such as copywriters, graphic designers, and personal assistants roles being heavily affected by AI. The question is how we can steer technological change in a way that allows for novel job opportunities, increased productivity, and economic benefits for all.”

“We are at a sliding doors moment, and policy makers urgently need to develop a strategy to make sure our labour market adapts to the 21st century, without leaving millions behind. It is crucial that all workers benefit from these technological advancements, and not just the big tech corporations.”

IPPR modelled three scenarios for the second wave’s impact:

  • Worst case: 7.9 million jobs lost with no GDP gains
  • Central case: 4.4 million jobs lost but 6.3 percent GDP growth (£144bn/year) 
  • Best case: No jobs lost and 13 percent GDP boost (£306bn/year) from augmenting at-risk jobs
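
As a quick sanity check on the scenarios above, each case pairs a percentage GDP gain with a cash figure, so dividing one by the other should recover a common GDP baseline. A minimal sketch (the ~£2.3tn comparison point for annual UK GDP is our assumption, not a figure from the report):

```python
# Dividing each scenario's cash gain by its percentage gain recovers
# the GDP baseline the report's figures imply.
central_base = 144e9 / 0.063  # central case: £144bn/year at 6.3% growth
best_base = 306e9 / 0.13      # best case: £306bn/year at 13% growth

print(f"Implied GDP baseline (central case): £{central_base / 1e12:.2f}tn")
print(f"Implied GDP baseline (best case):    £{best_base / 1e12:.2f}tn")
# Both land near £2.3tn, in line with the UK's annual GDP, so the
# percentage and cash figures in the scenarios are mutually consistent.
```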

IPPR warns the worst-case displacement is possible without government intervention, urging a “job-centric” AI strategy with fiscal incentives, regulation ensuring human oversight, and support for green jobs less exposed to automation.

The analysis underscores the disproportionate impact on certain demographics, with women and young people bearing the brunt of job displacement. Entry-level positions, predominantly occupied by these groups, face the gravest jeopardy as AI encroaches on roles such as secretarial and customer service positions.

Carsten Jung, Senior Economist at IPPR, said: “History shows that technological transition can be a boon if well managed, or can end in disruption if left to unfold without controls. Indeed, some occupations could be hard hit by generative AI, starting with back office jobs.

“But technology isn’t destiny and a jobs apocalypse is not inevitable – government, employers, and unions have the opportunity to make crucial design decisions now that ensure we manage this new technology well. If they don’t act soon, it may be too late.”

A full copy of the report can be found here (PDF).

(Photo by Cullan Smith)

See also: Stanhope raises £2.3m for AI that teaches machines to ‘make human-like decisions’

UN passes first global AI resolution (22 March 2024)

The UN General Assembly has adopted a landmark resolution on AI, aiming to promote the safe and ethical development of AI technologies worldwide.

The resolution, co-sponsored by over 120 countries, was adopted unanimously by all 193 UN member states on 21 March. This marks the first time the UN has established global standards and guidelines for AI.

The eight-page resolution calls for the development of “safe, secure, and trustworthy” AI systems that respect human rights and fundamental freedoms. It urges member states and stakeholders to refrain from deploying AI inconsistent with international human rights laws.

Key aspects of the resolution include:

  • Raising public awareness about AI’s benefits and risks
  • Strengthening investments and capabilities in AI research and development  
  • Safeguarding privacy and ensuring transparency in AI systems
  • Addressing diversity and bias issues in AI datasets and algorithms

The resolution also encourages governments to develop national policies, safeguards, and standards for ethical AI development and use. It calls on UN agencies to provide technical assistance to countries in need.

“The resolution adopted today lays out a comprehensive vision for how countries should respond to the opportunities and challenges of AI,” said Jake Sullivan, US National Security Advisor.

“It lays out a path for international cooperation on AI, including to promote equitable access, take steps to manage the risks of AI, protect privacy, guard against misuse, prevent exacerbated bias and discrimination.”

Growing international efforts to regulate AI  

The UN resolution follows several international efforts to regulate the rapidly growing AI industry over ethics and security concerns.

The European Union recently approved the AI Act to set risk-based rules for AI across the 27-nation bloc. Investigations into potential antitrust issues around AI have also been launched against major tech companies.

In the US, President Biden signed an executive order last year initiating a national AI strategy with a focus on safety and security.

As AI capabilities advance, the UN resolution signals a global commitment to ensure the technology’s development aligns with ethical principles and benefits humanity as a whole.

“Developed in consultation with civil society and private sector experts, the resolution squarely addresses the priorities of many developing countries, such as encouraging AI capacity building and harnessing the technology to advance sustainable development,” explained Sullivan.

“Critically, the resolution makes clear that protecting human rights and fundamental freedoms must be central to the development and use of AI systems.”

The full text of the UN resolution can be found here.

(Photo by Ilyass SEDDOUG)

See also: NVIDIA unveils Blackwell architecture to power next GenAI wave 

UK Home Secretary sounds alarm over deepfakes ahead of elections (26 February 2024)

Criminals and hostile state actors could hijack Britain’s democratic process by deploying AI-generated “deepfakes” to mislead voters, UK Home Secretary James Cleverly cautioned in remarks ahead of meetings with major tech companies. 

Speaking to The Times, Cleverly emphasised the rapid advancement of AI technology and its potential to undermine elections not just in the UK but globally. He warned that malign actors working on behalf of nations like Russia and Iran could generate thousands of highly realistic deepfake images and videos to disrupt the democratic process.

“Increasingly today the battle of ideas and policies takes place in the ever-changing and expanding digital sphere,” Cleverly told the newspaper. “The era of deepfake and AI-generated content to mislead and disrupt is already in play.”

The Home Secretary plans to urge collective action from Silicon Valley giants like Google, Meta, Apple, and YouTube when he meets with them this week. His aim is to implement “rules, transparency, and safeguards” to protect democracy from deepfake disinformation.

Cleverly’s warnings come after a series of deepfake audio clips imitating Labour leader Keir Starmer and London Mayor Sadiq Khan circulated online last year. Fake BBC News videos purporting to examine PM Rishi Sunak’s finances have also surfaced.

The tech meetings follow a recent pact signed by major AI companies like Adobe, Amazon, Google, and Microsoft during the Munich Security Conference to take “reasonable precautions” against disruptions caused by deepfake content during elections worldwide.

As concerns over the proliferation of deepfakes continue to grow, the world must confront the challenges they pose in shaping public discourse and potentially influencing electoral outcomes.

(Image Credit: Lauren Hurley / No 10 Downing Street under OGL 3 license)

See also: Stability AI previews Stable Diffusion 3 text-to-image model

Google pledges to fix Gemini’s inaccurate and biased image generation (22 February 2024)

Google’s Gemini model has come under fire for producing historically inaccurate and racially skewed images, reigniting concerns about bias in AI systems.

The controversy arose as users on social media platforms flooded feeds with examples of Gemini generating pictures depicting racially diverse Nazis, black medieval English kings, and other improbable scenarios.

Meanwhile, critics also pointed to Gemini’s refusal to depict Caucasians, its refusal to draw churches in San Francisco citing respect for indigenous sensitivities, and its avoidance of sensitive historical events such as Tiananmen Square in 1989.

In response to the backlash, Jack Krawczyk, the product lead for Google’s Gemini Experiences, acknowledged the issue in posts on social media platform X and pledged to rectify it.

For now, Google says it is pausing the image generation of people.

Some acknowledge the need to address diversity in AI-generated content but argue that Google’s response has been an overcorrection.

Marc Andreessen, the co-founder of Netscape and a16z, recently highlighted Goody-2, an “outrageously safe” parody AI model that refuses to answer any question it deems problematic. Andreessen warns of a broader trend towards censorship and bias in commercial AI systems, emphasising the potential consequences of such developments.

Addressing the broader implications, experts highlight the centralisation of AI models under a few major corporations and advocate for the development of open-source AI models to promote diversity and mitigate bias.

Yann LeCun, Meta’s chief AI scientist, has stressed the importance of fostering a diverse ecosystem of AI models, likening it to the need for a free and diverse press.

Bindu Reddy, CEO of Abacus.AI, has voiced similar concerns about the concentration of power in the absence of a healthy ecosystem of open-source models.

As discussions around the ethical and practical implications of AI continue, the need for transparent and inclusive AI development frameworks becomes increasingly apparent.

(Photo by Matt Artz on Unsplash)

See also: Reddit is reportedly selling data for AI training

UK announces over £100M to support ‘agile’ AI regulation (6 February 2024)

The UK government has announced over £100 million in new funding to support an “agile” approach to AI regulation. This includes £10 million to prepare and upskill regulators to address the risks and opportunities of AI across sectors like telecoms, healthcare, and education. 

The investment comes at a vital time, as research from Thoughtworks shows that 91% of British people believe government regulations must do more to hold businesses accountable for their AI systems. The public also wants more transparency, with 82% of consumers favouring businesses that proactively communicate how they are regulating generative AI.

In a government response published today to last year’s AI Regulation White Paper consultation, the UK outlined its context-based approach to regulation that empowers existing regulators to address AI risks in a targeted way, while avoiding rushed legislation that could stifle innovation.

However, the government for the first time set out its thinking on potential future binding requirements for developers building advanced AI systems, to ensure accountability for safety – a measure 68% of the public said was needed in AI regulation. 

The response also revealed all key regulators will publish their approach to managing AI risks by 30 April, detailing their expertise and plans for the coming year. This aims to provide confidence to businesses and citizens on transparency. However, 30% still don’t think increased AI regulation is actually for their benefit, indicating scepticism remains.

Additionally, nearly £90 million was announced to launch nine new research hubs across the UK and a US partnership focused on responsible AI development. Separately, £2 million in funding will support projects defining responsible AI across sectors like policing – with 56% of the public wanting improved user education around AI.

Tom Whittaker, Senior Associate at independent UK law firm Burges Salmon, said: “The technology industry will welcome the large financial investment by the UK government to support regulators continuing what many see as an agile and sector-specific approach to AI regulation.

“The UK government is trying to position itself as pro-innovation for AI generally and across multiple sectors.  This is notable at a time when the EU is pushing ahead with its own significant AI legislation that the EU consider will boost trustworthy AI but which some consider a threat to innovation.”

Science Minister Michelle Donelan said the UK’s “innovative approach to AI regulation” has made it a leader in both AI safety and development. She said the agile, sector-specific approach allows the UK to “grip the risks immediately”, paving the way for it to reap AI’s benefits safely.

The wide-ranging funding and initiatives aim to cement the UK as a pioneer in safe AI innovation while assuaging public concerns. This builds on previous commitments like the £100 million AI Safety Institute to evaluate emerging models. 

Greg Hanson, GVP and Head of Sales EMEA North at Informatica, commented: “Undoubtedly, greater AI regulation is coming to the UK. And demand for this is escalating – especially considering half (52%) of UK businesses are already forging ahead with generative AI, above the global average of 45%.

“Yet with the adoption of AI, comes new challenges. Nearly all businesses in the UK who have adopted AI admit to having encountered roadblocks. In fact, 43% say AI governance is the main obstacle, closely followed by AI ethics (42%).”

Overall, the package of measures amounts to over £100 million of new funding towards the UK’s mission to lead on safe and responsible AI progress. This balances safely harnessing AI’s potential economic and societal benefits with a targeted approach to regulating very real risks.

(Photo by Rocco Dipoppa on Unsplash)

See also: Bank of England Governor: AI won’t lead to mass job losses
