government Archives - AI News

OpenAI faces complaint over fictional outputs
29 April 2024

European data protection advocacy group noyb has filed a complaint against OpenAI over the company’s inability to correct inaccurate information generated by ChatGPT. The group alleges that OpenAI’s failure to ensure the accuracy of personal data processed by the service violates the General Data Protection Regulation (GDPR) in the European Union.

“Making up false information is quite problematic in itself. But when it comes to false information about individuals, there can be serious consequences,” said Maartje de Graaf, Data Protection Lawyer at noyb. 

“It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.”

The GDPR requires that personal data be accurate, and individuals have the right to rectification if data is inaccurate, as well as the right to access information about the data processed and its sources. However, OpenAI has openly admitted that it cannot correct incorrect information generated by ChatGPT or disclose the sources of the data used to train the model.

“Factual accuracy in large language models remains an area of active research,” OpenAI has argued.

The advocacy group highlights a New York Times report that found chatbots like ChatGPT “invent information at least 3 percent of the time – and as high as 27 percent.” In the complaint against OpenAI, noyb cites an example where ChatGPT repeatedly provided an incorrect date of birth for the complainant, a public figure, despite requests for rectification.

“Despite the fact that the complainant’s date of birth provided by ChatGPT is incorrect, OpenAI refused his request to rectify or erase the data, arguing that it wasn’t possible to correct data,” noyb stated.

OpenAI claimed it could filter or block data on certain prompts, such as the complainant’s name, but not without blocking all information about the individual. The company also failed to adequately respond to the complainant’s access request, which the GDPR requires companies to fulfil.

“The obligation to comply with access requests applies to all companies. It is clearly possible to keep records of training data that was used to at least have an idea about the sources of information,” said de Graaf. “It seems that with each ‘innovation,’ another group of companies thinks that its products don’t have to comply with the law.”

European privacy watchdogs have already scrutinised ChatGPT’s inaccuracies, with the Italian Data Protection Authority imposing a temporary restriction on OpenAI’s data processing in March 2023 and the European Data Protection Board establishing a task force on ChatGPT.

In its complaint, noyb is asking the Austrian Data Protection Authority to investigate OpenAI’s data processing and measures to ensure the accuracy of personal data processed by its large language models. The advocacy group also requests that the authority order OpenAI to comply with the complainant’s access request, bring its processing in line with the GDPR, and impose a fine to ensure future compliance.

You can read the full complaint here (PDF).

(Photo by Eleonora Francesca Grotto)

See also: Igor Jablokov, Pryon: Building a responsible AI future

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

UK and South Korea to co-host AI Seoul Summit
12 April 2024

The UK and South Korea are set to co-host the AI Seoul Summit on the 21st and 22nd of May. This summit aims to pave the way for the safe development of AI technologies, drawing on the cooperative framework laid down by the Bletchley Declaration.

The two-day event will feature a virtual leaders’ session, co-chaired by British Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol, and a subsequent in-person meeting among digital ministers. UK Technology Secretary Michelle Donelan and Korean Minister of Science and ICT Lee Jong-Ho will co-host the latter.

This summit builds upon the historic discussions held at Bletchley Park in the UK last year, emphasising AI safety, inclusion, and innovation. It aims to ensure that AI advancements benefit humanity while minimising potential risks and enhancing global governance of tech innovation.

“The summit we held at Bletchley Park was a generational moment,” stated Donelan. “If we continue to bring international governments and a broad range of voices together, I have every confidence that we can continue to develop a global approach which will allow us to realise the transformative potential of this generation-defining technology safely and responsibly.”

Echoing this sentiment, Minister Lee Jong-Ho highlighted the importance of the upcoming Seoul Summit in furthering global cooperation on AI safety and innovation.

“AI is advancing at an unprecedented pace that exceeds our expectations, and it is crucial to establish global norms and governance to harness such technological innovations to enhance the welfare of humanity,” explained Lee. “We hope that the AI Seoul Summit will serve as an opportunity to strengthen global cooperation on not only AI safety but also AI innovation and inclusion, and promote sustainable AI development.”

Innovation remains a focal point for the UK, evidenced by initiatives like the Manchester Prize and the formation of the AI Safety Institute: the first state-backed organisation dedicated to AI safety. This proactive approach mirrors the UK’s commitment to international collaboration on AI governance, underscored by a recent agreement with the US on AI safety measures.

Accompanying the Seoul Summit will be the release of the International Scientific Report on Advanced AI Safety. This report, independently led by Turing Award winner Yoshua Bengio, represents a collective effort to consolidate the best scientific research on AI safety. It underscores the summit’s role not only as a forum for discussion but as a catalyst for actionable insight into AI’s safe development.

The agenda of the AI Seoul Summit reflects the urgency of addressing the challenges and opportunities presented by AI, from model safety evaluations to sustainable AI development. As the world embraces AI innovation, the summit embodies a concerted effort to shape a future where technology serves humanity safely and delivers prosperity and inclusivity for all.

See also: US and Japan announce sweeping AI and tech collaboration

UK and US sign pact to develop AI safety tests
2 April 2024

The UK and US have signed a landmark agreement to collaborate on developing rigorous testing for advanced AI systems, representing a major step forward in ensuring their safe deployment.

The Memorandum of Understanding – signed Monday by UK Technology Secretary Michelle Donelan and US Commerce Secretary Gina Raimondo – establishes a partnership to align the scientific approaches of both countries in rapidly iterating robust evaluation methods for cutting-edge AI models, systems, and agents.

Under the deal, the UK’s new AI Safety Institute and the forthcoming US AI Safety Institute will exchange research expertise with the aim of mitigating AI risks, including how to independently evaluate private AI models from companies such as OpenAI. The partnership is modelled on the security collaboration between GCHQ and the National Security Agency.

“This agreement represents a landmark moment, as the UK and the United States deepen our enduring special relationship to address the defining technology challenge of our generation,” stated Donelan. “Only by working together can we address the technology’s risks head on and harness its enormous potential to help us all live easier and healthier lives.”

The partnership follows through on commitments made at the AI Safety Summit hosted in the UK last November. The institutes plan to build a common approach to AI safety testing and share capabilities to tackle risks effectively. They intend to conduct at least one joint public testing exercise on an openly accessible AI model and explore personnel exchanges.

Raimondo emphasised the significance of the collaboration, stating: “AI is the defining technology of our generation. This partnership is going to accelerate both of our Institutes’ work across the full spectrum of risks, whether to our national security or to our broader society.”

Both governments recognise AI’s rapid development and the urgent need for a shared global approach to safety that can keep pace with emerging risks. The partnership takes effect immediately, allowing seamless cooperation between the organisations.

“By working together, we are furthering the long-lasting special relationship between the US and UK and laying the groundwork to ensure that we’re keeping AI safe both now and in the future,” added Raimondo.

In addition to joint testing and capability sharing, the UK and US will exchange vital information about AI model capabilities, risks, and fundamental technical research. This aims to underpin a common scientific foundation for AI safety testing that can be adopted by researchers worldwide.

Despite the focus on risk, Donelan insisted the UK has no plans to regulate AI more broadly in the short term. In contrast, President Joe Biden has taken a stricter position on AI models that threaten national security, and the EU AI Act has adopted tougher regulations.

Industry experts welcomed the collaboration as essential for promoting trust and safety in AI development and adoption across sectors like marketing, finance, and customer service.

“Ensuring AI’s development and use are governed by trust and safety is paramount,” said Ramprakash Ramamoorthy of Zoho. “Taking safeguards to protect training data mitigates risks and bolsters confidence among those deploying AI solutions.”

Dr Henry Balani of Encompass added: “Mitigating the risks of AI, through this collaboration agreement with the US, is a key step towards mitigating risks of financial crime, fostering collaboration, and supporting innovation in a crucial, advancing area of technology.”

(Photo by Art Lasovsky)

See also: IPPR: 8M UK careers at risk of ‘job apocalypse’ from AI

IPPR: 8M UK careers at risk of ‘job apocalypse’ from AI
27 March 2024

A report by the Institute for Public Policy Research (IPPR) sheds light on the potential impact of AI on the UK job market. The study warns of an imminent ‘job apocalypse’ threatening over eight million careers across the nation unless the government intervenes swiftly.

The report identifies two key stages of generative AI adoption. The first wave, which is already underway, puts 11 percent of tasks performed by UK workers at risk. Routine cognitive tasks like database management and organisational tasks like scheduling are most exposed.

However, in a potential second wave, AI could handle a staggering 59 percent of tasks—impacting higher-earning jobs and non-routine cognitive work like creating databases.

Bhargav Srinivasa Desikan, Senior Research Fellow at IPPR, said: “We could see jobs such as copywriters, graphic designers, and personal assistants being heavily affected by AI. The question is how we can steer technological change in a way that allows for novel job opportunities, increased productivity, and economic benefits for all.”

“We are at a sliding doors moment, and policy makers urgently need to develop a strategy to make sure our labour market adapts to the 21st century, without leaving millions behind. It is crucial that all workers benefit from these technological advancements, and not just the big tech corporations.”

IPPR modelled three scenarios for the second wave’s impact:

  • Worst case: 7.9 million jobs lost with no GDP gains
  • Central case: 4.4 million jobs lost but 6.3 percent GDP growth (£144bn/year) 
  • Best case: No jobs lost and 13 percent GDP boost (£306bn/year) from augmenting at-risk jobs
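As a rough sanity check, the cash figures above can be compared against the stated GDP percentages. This sketch assumes each cash figure is simply the stated percentage of a single UK GDP baseline, which is a simplification of IPPR’s actual modelling:

```python
# Back-of-envelope check of the IPPR scenario figures.
# Assumption (not from the report): each £bn figure equals the stated
# percentage applied to one common UK GDP baseline.
scenarios = {
    "central": (6.3, 144),   # (percent GDP growth, £bn per year)
    "best":    (13.0, 306),
}

for name, (pct, gbp_bn) in scenarios.items():
    implied_gdp = gbp_bn / (pct / 100)  # implied GDP baseline in £bn
    print(f"{name}: implied UK GDP baseline ≈ £{implied_gdp:,.0f}bn")
```

Both scenarios imply a baseline of roughly £2.3 trillion, in line with UK GDP, so the central and best-case figures are at least internally consistent.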

IPPR warns the worst-case displacement is possible without government intervention, urging a “job-centric” AI strategy with fiscal incentives, regulation ensuring human oversight, and support for green jobs less exposed to automation.

The analysis underscores the disproportionate impact on certain demographics, with women and young people bearing the brunt of job displacement. Entry-level positions, predominantly occupied by these groups, face the gravest jeopardy as AI encroaches on roles such as secretarial and customer service positions.

Carsten Jung, Senior Economist at IPPR, said: “History shows that technological transition can be a boon if well managed, or can end in disruption if left to unfold without controls. Indeed, some occupations could be hard hit by generative AI, starting with back office jobs.

“But technology isn’t destiny and a jobs apocalypse is not inevitable – government, employers, and unions have the opportunity to make crucial design decisions now that ensure we manage this new technology well. If they don’t act soon, it may be too late.”

A full copy of the report can be found here (PDF).

(Photo by Cullan Smith)

See also: Stanhope raises £2.3m for AI that teaches machines to ‘make human-like decisions’

AIs in India will need government permission before launching
4 March 2024

In an advisory issued last Friday, India’s Ministry of Electronics and Information Technology (MeitY) declared that any AI technology still in development must acquire explicit government permission before being released to the public.

Developers will also only be able to deploy these technologies after labelling the potential fallibility or unreliability of the output generated.

Furthermore, the document outlines plans for implementing a “consent popup” mechanism to inform users about potential defects or errors produced by AI. It also mandates the labelling of deepfakes with permanent unique metadata or other identifiers to prevent misuse.
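The advisory does not specify how such permanent identifiers should be implemented. One conceivable approach, sketched here with hypothetical field names and loosely in the spirit of C2PA-style provenance manifests, is to bind a unique identifier to a hash of the generated content:

```python
import hashlib
import json
import uuid


def label_ai_output(content: bytes) -> dict:
    """Build a provenance manifest for AI-generated content.

    A simplified, hypothetical sketch of 'permanent unique metadata or
    identifiers'; production systems are considerably more elaborate.
    """
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),  # ties the label to these exact bytes
        "provenance_id": str(uuid.uuid4()),                     # permanent unique identifier
        "synthetic": True,                                      # declares AI origin
    }


manifest = label_ai_output(b"example generated media bytes")
print(json.dumps(manifest, indent=2))
```

Because the identifier is bound to a content hash, any edit to the media invalidates the manifest, which is one way a regulator could detect stripped or tampered labels.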

In addition to these measures, the advisory orders all intermediaries or platforms to ensure that any AI model product – including large language models (LLM) – does not permit bias, discrimination, or threaten the integrity of the electoral process.

Some industry figures have criticised India’s plans as going too far.

Developers are requested to comply with the advisory within 15 days of its issuance. It has been suggested that after compliance and application for permission to release a product, developers may be required to perform a demo for government officials or undergo stress testing.

Although the advisory is not legally binding at present, it signifies the government’s expectations and hints at the future direction of regulation in the AI sector.

“We are doing it as an advisory today asking you (the AI platforms) to comply with it,” said IT minister Rajeev Chandrasekhar. He added that this stance would eventually be encoded in legislation.

“Generative AI or AI platforms available on the internet will have to take full responsibility for what the platform does, and cannot escape the accountability by saying that their platform is under testing,” continued Chandrasekhar, as reported by local media.

(Photo by Naveed Ahmed on Unsplash)

See also: Elon Musk sues OpenAI over alleged breach of nonprofit agreement

UK and France to collaborate on AI following Horizon membership
29 February 2024

The UK and France have announced new funding initiatives and partnerships aimed at advancing global AI safety. The developments come in the wake of the UK’s association with Horizon Europe, a move broadly seen as putting the divisions of Brexit in the past and repairing relations for the good of the continent.

French Minister for Higher Education and Research, Sylvie Retailleau, is scheduled to meet with UK Secretary of State Michelle Donelan in London today for discussions marking a pivotal moment in bilateral scientific cooperation.

Building upon a rich history of collaboration that has yielded groundbreaking innovations such as the Concorde and the Channel Tunnel, the ministers will endorse a joint declaration aimed at deepening research ties between the two nations. This includes a commitment of £800,000 in new funding towards joint research efforts, particularly within the framework of Horizon Europe.

A landmark partnership between the UK’s AI Safety Institute and France’s Inria will also be unveiled, signifying a shared commitment to the responsible development of AI technology. This collaboration is timely, given France’s upcoming hosting of the AI Safety Summit later this year—which aims to build upon previous agreements and discussions on frontier AI testing achieved during the UK edition last year.

Furthermore, the establishment of the French-British joint committee on Science, Technology, and Innovation represents an opportunity to foster cooperation across a range of fields, including low-carbon hydrogen, space observation, AI, and research security.

UK Secretary of State Michelle Donelan said:

“The links between the UK and France’s brightest minds are deep and longstanding, from breakthroughs in aerospace to tackling climate change. It is only right that we support our innovators, to unleash the power of their ideas to create jobs and grow businesses in concert with our closest neighbour on the continent.

Research is fundamentally collaborative, and alongside our bespoke deal on Horizon Europe, this deepening partnership with France – along with our joint work on AI safety – is another key step in realising the UK’s science superpower ambitions.”

The collaboration between the UK and France underscores their shared commitment to advancing scientific research and innovation, with a focus on emerging technologies such as AI and quantum.

Sylvie Retailleau, French Minister of Higher Education and Research, commented:

“This joint committee is a perfect illustration of the international component of research – from identifying key priorities such as hydrogen, AI, space and research security – to enabling collaborative work and exchange of ideas and good practices through funding.

Doing so with a trusted partner as the UK – who just associated to Horizon Europe – is a great opportunity to strengthen France’s science capabilities abroad, and participate in Europe’s strategic autonomy openness.”

As the UK continues to deepen its engagement with global partners in the field of science and technology, these bilateral agreements serve as a testament to its ambition to lead the way in scientific discovery and innovation on the world stage.

(Photo by Aleks Marinkovic on Unsplash)

See also: UK Home Secretary sounds alarm over deepfakes ahead of elections

UK Home Secretary sounds alarm over deepfakes ahead of elections
26 February 2024

Criminals and hostile state actors could hijack Britain’s democratic process by deploying AI-generated “deepfakes” to mislead voters, UK Home Secretary James Cleverly cautioned in remarks ahead of meetings with major tech companies. 

Speaking to The Times, Cleverly emphasised the rapid advancement of AI technology and its potential to undermine elections not just in the UK but globally. He warned that malign actors working on behalf of nations like Russia and Iran could generate thousands of highly realistic deepfake images and videos to disrupt the democratic process.

“Increasingly today the battle of ideas and policies takes place in the ever-changing and expanding digital sphere,” Cleverly told the newspaper. “The era of deepfake and AI-generated content to mislead and disrupt is already in play.”

The Home Secretary plans to urge collective action from Silicon Valley giants like Google, Meta, Apple, and YouTube when he meets with them this week. His aim is to implement “rules, transparency, and safeguards” to protect democracy from deepfake disinformation.

Cleverly’s warnings come after a series of deepfake audios imitating Labour leader Keir Starmer and London Mayor Sadiq Khan circulated online last year. Fake BBC News videos purporting to examine PM Rishi Sunak’s finances have also surfaced.

The tech meetings follow a recent pact signed by major AI companies like Adobe, Amazon, Google, and Microsoft during the Munich Security Conference to take “reasonable precautions” against disruptions caused by deepfake content during elections worldwide.

As concerns over the proliferation of deepfakes continue to grow, the world must confront the challenges they pose in shaping public discourse and potentially influencing electoral outcomes.

(Image Credit: Lauren Hurley / No 10 Downing Street under OGL 3 license)

See also: Stability AI previews Stable Diffusion 3 text-to-image model

UK announces over £100M to support ‘agile’ AI regulation
6 February 2024

The UK government has announced over £100 million in new funding to support an “agile” approach to AI regulation. This includes £10 million to prepare and upskill regulators to address the risks and opportunities of AI across sectors like telecoms, healthcare, and education. 

The investment comes at a vital time: research from Thoughtworks shows 91% of British people believe government regulation must do more to hold businesses accountable for their AI systems. The public also wants more transparency, with 82% of consumers favouring businesses that proactively communicate how they are regulating general AI.

In a government response published today to last year’s AI Regulation White Paper consultation, the UK outlined its context-based approach to regulation that empowers existing regulators to address AI risks in a targeted way, while avoiding rushed legislation that could stifle innovation.

However, the government for the first time set out its thinking on potential future binding requirements for developers building advanced AI systems, to ensure accountability for safety – a measure 68% of the public said was needed in AI regulation. 

The response also revealed that all key regulators will publish their approach to managing AI risks by 30 April, detailing their expertise and their plans for the coming year. This transparency is intended to give businesses and citizens confidence. However, 30% of the public still don’t think increased AI regulation is actually for their benefit, indicating that scepticism remains.

Additionally, nearly £90 million was announced to launch nine new research hubs across the UK and a US partnership focused on responsible AI development. Separately, £2 million in funding will support projects defining responsible AI across sectors like policing – with 56% of the public wanting improved user education around AI.

Tom Whittaker, Senior Associate at independent UK law firm Burges Salmon, said: “The technology industry will welcome the large financial investment by the UK government to support regulators continuing what many see as an agile and sector-specific approach to AI regulation.

“The UK government is trying to position itself as pro-innovation for AI generally and across multiple sectors. This is notable at a time when the EU is pushing ahead with its own significant AI legislation, which the EU considers will boost trustworthy AI but which some consider a threat to innovation.”

Science Minister Michelle Donelan said the UK’s “innovative approach to AI regulation” has made it a leader in both AI safety and development. She said the agile, sector-specific approach allows the UK to “grip the risks immediately”, paving the way for it to reap AI’s benefits safely.

The wide-ranging funding and initiatives aim to cement the UK as a pioneer in safe AI innovation while assuaging public concerns. This builds on previous commitments like the £100 million AI Safety Institute to evaluate emerging models. 

Greg Hanson, GVP and Head of Sales EMEA North at Informatica, commented: “Undoubtedly, greater AI regulation is coming to the UK. And demand for this is escalating – especially considering half (52%) of UK businesses are already forging ahead with generative AI, above the global average of 45%.

“Yet with the adoption of AI come new challenges. Nearly all businesses in the UK that have adopted AI admit to having encountered roadblocks. In fact, 43% say AI governance is the main obstacle, closely followed by AI ethics (42%).”

Overall, the package of measures amounts to over £100 million of new funding towards the UK’s mission to lead on safe and responsible AI progress. It balances harnessing AI’s potential economic and societal benefits with a targeted approach to regulating the technology’s very real risks.

(Photo by Rocco Dipoppa on Unsplash)

See also: Bank of England Governor: AI won’t lead to mass job losses


AUKUS trial advances AI for military operations (5 February 2024) https://www.artificialintelligence-news.com/2024/02/05/aukus-trial-advances-ai-for-military-operations/

The post AUKUS trial advances AI for military operations  appeared first on AI News.

The UK armed forces and Defence Science and Technology Laboratory (Dstl) recently collaborated with the militaries of Australia and the US as part of the AUKUS partnership in a landmark trial focused on AI and autonomous systems. 

The trial, called Trusted Operation of Robotic Vehicles in Contested Environments (TORVICE), was held in Australia under the AUKUS partnership formed last year between the three countries. It aimed to test robotic vehicles and sensors in situations involving electronic attacks, GPS disruption, and other threats to evaluate the resilience of autonomous systems expected to play a major role in future military operations.

Understanding how to ensure these AI systems can operate reliably in the face of modern electronic warfare and cyber threats will be critical before the technology can be more widely adopted.  

The TORVICE trial featured US and British autonomous vehicles carrying out reconnaissance missions while Australian units simulated battlefield electronic attacks on their systems. Analysis of the performance data will help strengthen the protections and safeguards needed to prevent system failures or disruptions.

Guy Powell, Dstl’s technical authority for the trial, said: “The TORVICE trial aims to understand the capabilities of robotic and autonomous systems to operate in contested environments. We need to understand how robust these systems are when subject to attack.

“Robotic and autonomous systems are a transformational capability that we are introducing to armies across all three nations.” 

This builds on the first AUKUS autonomous systems trial, held in April 2023 in the UK. It also represents a step forward following the AUKUS defence ministers’ December announcement that Resilient and Autonomous Artificial Intelligence Technologies (RAAIT) would be integrated into the three countries’ military forces beginning in 2024.

Dstl military advisor Lt Col Russ Atherton said that successfully harnessing AI and autonomy promises to “be an absolute game-changer” that reduces the risk to soldiers. The technology could carry out key tasks such as sensor operation and logistics over wider areas.

“The ability to deploy different payloads such as sensors and logistics across a larger battlespace will give commanders greater options than currently exist,” explained Lt Col Atherton.

By collaborating, the AUKUS allies aim to accelerate development in this crucial new area of warfare, improving interoperability between their forces, maximising their expertise, and strengthening deterrence in the Indo-Pacific region.

As AUKUS continues to deepen cooperation on cutting-edge military technologies, this collaborative effort will significantly enhance military capabilities while reducing risks for warfighters.

(Image Credit: Dstl)

See also: Experts from 30 nations will contribute to global AI safety report


Experts from 30 nations will contribute to global AI safety report (1 February 2024) https://www.artificialintelligence-news.com/2024/02/01/experts-from-30-nations-contribute-global-ai-safety-report/

The post Experts from 30 nations will contribute to global AI safety report appeared first on AI News.

Leading experts from 30 nations across the globe will advise on a landmark report assessing the capabilities and risks of AI systems. 

The International Scientific Report on Advanced AI Safety aims to bring together the best scientific research on AI safety to inform policymakers and future discussions on the safe development of AI technology. The report builds on the legacy of last November’s UK AI Safety Summit, where countries signed the Bletchley Declaration agreeing to collaborate on AI safety issues.

An Expert Advisory Panel of 32 prominent international figures – including chief technology officers, UN envoys, and national chief scientific advisers – has been unveiled. The panel includes Dr Hiroaki Kitano, CTO of Sony in Japan; Amandeep Gill, UN Envoy on Technology; and Dame Angela McLean, the UK’s Chief Scientific Adviser.

The panel will play a crucial role in advising on the report’s development and content, ensuring it comprehensively and objectively assesses the capabilities and risks of advanced AI. Its regular input throughout the drafting process will help build broad consensus on vital global AI safety research.

Initial findings from the report are due to be published ahead of South Korea’s AI Safety Summit this spring. A second, more complete publication will then coincide with France’s summit later this year, helping to inform discussions at both events.

The international report will follow a paper published by the UK last year which included declassified information from intelligence services and highlighted the risks associated with frontier AI.

Michelle Donelan, UK Secretary of State for Science, Innovation and Technology, said: “The International Scientific Report on Advanced AI Safety will be a landmark publication, bringing the best scientific research on the risks and capabilities of frontier AI development under one roof.

“The report is one part of the enduring legacy of November’s AI Safety Summit, and I am delighted that countries who agreed the Bletchley Declaration will join us in its development.”

Professor Yoshua Bengio, the pioneering AI researcher from Quebec’s Mila Institute, said the publication “will be an important tool in helping to inform the discussions at AI Safety Summits being held by the Republic of Korea and France later this year.”

The principles guiding the report’s development – inspired by the IPCC climate change assessments – are comprehensiveness, objectivity, transparency, and scientific assessment. This framework aims to ensure a thorough and balanced evaluation of AI’s risks.

A list of all participating countries and their nominated representatives can be found here.

(Photo by Ricardo Gomez Angel on Unsplash)

See also: UK and Canada sign AI compute agreement

