safety Archives - AI News
https://www.artificialintelligence-news.com/tag/safety/
Fri, 12 Apr 2024 12:03:51 +0000

UK and South Korea to co-host AI Seoul Summit
https://www.artificialintelligence-news.com/2024/04/12/uk-and-south-korea-cohost-ai-seoul-summit/
Fri, 12 Apr 2024 12:03:50 +0000
The UK and South Korea are set to co-host the AI Seoul Summit on the 21st and 22nd of May. This summit aims to pave the way for the safe development of AI technologies, drawing on the cooperative framework laid down by the Bletchley Declaration.

The two-day event will feature a virtual leaders’ session, co-chaired by British Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol, and a subsequent in-person meeting among digital ministers. UK Technology Secretary Michelle Donelan and Korean Minister of Science and ICT Lee Jong-Ho will co-host the latter.

This summit builds upon the Bletchley Park discussions held in the UK last year, emphasising AI safety, inclusion, and innovation. It aims to ensure that AI advancements benefit humanity while minimising potential risks and enhancing global governance of tech innovation.

“The summit we held at Bletchley Park was a generational moment,” stated Donelan. “If we continue to bring international governments and a broad range of voices together, I have every confidence that we can continue to develop a global approach which will allow us to realise the transformative potential of this generation-defining technology safely and responsibly.”

Echoing this sentiment, Minister Lee Jong-Ho highlighted the importance of the upcoming Seoul Summit in furthering global cooperation on AI safety and innovation.

“AI is advancing at an unprecedented pace that exceeds our expectations, and it is crucial to establish global norms and governance to harness such technological innovations to enhance the welfare of humanity,” explained Lee. “We hope that the AI Seoul Summit will serve as an opportunity to strengthen global cooperation on not only AI safety but also AI innovation and inclusion, and promote sustainable AI development.”

Innovation remains a focal point for the UK, evidenced by initiatives like the Manchester Prize and the formation of the AI Safety Institute: the first state-backed organisation dedicated to AI safety. This proactive approach mirrors the UK’s commitment to international collaboration on AI governance, underscored by a recent agreement with the US on AI safety measures.

Accompanying the Seoul Summit will be the release of the International Scientific Report on Advanced AI Safety. This report, independently led by Turing Award winner Yoshua Bengio, represents a collective effort to consolidate the best scientific research on AI safety. It underscores the summit’s role not only as a forum for discussion but as a catalyst for actionable insight into AI’s safe development.

The agenda of the AI Seoul Summit reflects the urgency of addressing the challenges and opportunities presented by AI, from model safety evaluations to fostering sustainable AI development. As the world embraces AI innovation, the summit embodies a concerted effort to shape a future where technology serves humanity safely and delivers prosperity and inclusivity for all.

See also: US and Japan announce sweeping AI and tech collaboration

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

UK and US sign pact to develop AI safety tests
https://www.artificialintelligence-news.com/2024/04/02/uk-and-us-sign-pact-develop-ai-safety-tests/
Tue, 02 Apr 2024 10:17:09 +0000
The UK and US have signed a landmark agreement to collaborate on developing rigorous testing for advanced AI systems, representing a major step forward in ensuring their safe deployment.

The Memorandum of Understanding – signed Monday by UK Technology Secretary Michelle Donelan and US Commerce Secretary Gina Raimondo – establishes a partnership to align the scientific approaches of both countries in rapidly iterating robust evaluation methods for cutting-edge AI models, systems, and agents.

Under the deal, the UK’s new AI Safety Institute and the upcoming US organisation will exchange research expertise with the aim of mitigating AI risks, including how to independently evaluate private AI models from companies such as OpenAI. The partnership is modelled on the security collaboration between GCHQ and the National Security Agency.

“This agreement represents a landmark moment, as the UK and the United States deepen our enduring special relationship to address the defining technology challenge of our generation,” stated Donelan. “Only by working together can we address the technology’s risks head on and harness its enormous potential to help us all live easier and healthier lives.”

The partnership follows through on commitments made at the AI Safety Summit hosted in the UK last November. The institutes plan to build a common approach to AI safety testing and share capabilities to tackle risks effectively. They intend to conduct at least one joint public testing exercise on an openly accessible AI model and explore personnel exchanges.

Raimondo emphasised the significance of the collaboration, stating: “AI is the defining technology of our generation. This partnership is going to accelerate both of our Institutes’ work across the full spectrum of risks, whether to our national security or to our broader society.”

Both governments recognise AI’s rapid development and the urgent need for a shared global approach to safety that can keep pace with emerging risks. The partnership takes effect immediately, allowing seamless cooperation between the organisations.

“By working together, we are furthering the long-lasting special relationship between the US and UK and laying the groundwork to ensure that we’re keeping AI safe both now and in the future,” added Raimondo.

In addition to joint testing and capability sharing, the UK and US will exchange vital information about AI model capabilities, risks, and fundamental technical research. This aims to underpin a common scientific foundation for AI safety testing that can be adopted by researchers worldwide.

Despite the focus on risk, Donelan insisted the UK has no plans to regulate AI more broadly in the short term. In contrast, President Joe Biden has taken a stricter position on AI models that threaten national security, and the EU AI Act has adopted tougher regulations.

Industry experts welcomed the collaboration as essential for promoting trust and safety in AI development and adoption across sectors like marketing, finance, and customer service.

“Ensuring AI’s development and use are governed by trust and safety is paramount,” said Ramprakash Ramamoorthy of Zoho. “Taking safeguards to protect training data mitigates risks and bolsters confidence among those deploying AI solutions.”

Dr Henry Balani of Encompass added: “Mitigating the risks of AI, through this collaboration agreement with the US, is a key step towards mitigating risks of financial crime, fostering collaboration, and supporting innovation in a crucial, advancing area of technology.”

(Photo by Art Lasovsky)

See also: IPPR: 8M UK careers at risk of ‘job apocalypse’ from AI

UK and France to collaborate on AI following Horizon membership
https://www.artificialintelligence-news.com/2024/02/29/uk-and-france-collaborate-ai-following-horizon-membership/
Thu, 29 Feb 2024 10:07:19 +0000
The UK and France have announced new funding initiatives and partnerships aimed at advancing global AI safety. The developments come in the wake of the UK’s association with Horizon Europe, a move broadly seen as putting the divisions of Brexit in the past and repairing relations for the good of the continent.

French Minister for Higher Education and Research, Sylvie Retailleau, is scheduled to meet with UK Secretary of State Michelle Donelan in London today for discussions marking a pivotal moment in bilateral scientific cooperation.

Building upon a rich history of collaboration that has yielded groundbreaking innovations such as the Concorde and the Channel Tunnel, the ministers will endorse a joint declaration aimed at deepening research ties between the two nations. This includes a commitment of £800,000 in new funding towards joint research efforts, particularly within the framework of Horizon Europe.

A landmark partnership between the UK’s AI Safety Institute and France’s Inria will also be unveiled, signifying a shared commitment to the responsible development of AI technology. This collaboration is timely, given France’s hosting of the next AI Safety Summit later this year, which aims to build upon the agreements and discussions on frontier AI testing reached during the UK edition last year.

Furthermore, the establishment of the French-British joint committee on Science, Technology, and Innovation represents an opportunity to foster cooperation across a range of fields, including low-carbon hydrogen, space observation, AI, and research security.

UK Secretary of State Michelle Donelan said:

“The links between the UK and France’s brightest minds are deep and longstanding, from breakthroughs in aerospace to tackling climate change. It is only right that we support our innovators, to unleash the power of their ideas to create jobs and grow businesses in concert with our closest neighbour on the continent.

Research is fundamentally collaborative, and alongside our bespoke deal on Horizon Europe, this deepening partnership with France – along with our joint work on AI safety – is another key step in realising the UK’s science superpower ambitions.”

The collaboration between the UK and France underscores their shared commitment to advancing scientific research and innovation, with a focus on emerging technologies such as AI and quantum.

Sylvie Retailleau, French Minister of Higher Education and Research, commented:

“This joint committee is a perfect illustration of the international component of research – from identifying key priorities such as hydrogen, AI, space and research security – to enabling collaborative work and exchange of ideas and good practices through funding.

Doing so with a trusted partner as the UK – who just associated to Horizon Europe – is a great opportunity to strengthen France’s science capabilities abroad, and participate in Europe’s strategic autonomy openness.”

As the UK continues to deepen its engagement with global partners in the field of science and technology, these bilateral agreements serve as a testament to its ambition to lead the way in scientific discovery and innovation on the world stage.

(Photo by Aleks Marinkovic on Unsplash)

See also: UK Home Secretary sounds alarm over deepfakes ahead of elections

Experts from 30 nations will contribute to global AI safety report
https://www.artificialintelligence-news.com/2024/02/01/experts-from-30-nations-contribute-global-ai-safety-report/
Thu, 01 Feb 2024 17:00:29 +0000
Leading experts from 30 nations across the globe will advise on a landmark report assessing the capabilities and risks of AI systems. 

The International Scientific Report on Advanced AI Safety aims to bring together the best scientific research on AI safety to inform policymakers and future discussions on the safe development of AI technology. The report builds on the legacy of last November’s UK AI Safety Summit, where countries signed the Bletchley Declaration agreeing to collaborate on AI safety issues.

An Expert Advisory Panel featuring 32 prominent international figures – including chief technology officers, UN envoys, and national chief scientific advisers – has been unveiled. The panel includes experts such as Dr Hiroaki Kitano, CTO of Sony in Japan; Amandeep Gill, UN Envoy on Technology; and the UK’s Chief Scientific Adviser, Dame Angela McLean.

This team of global experts will play a crucial role in advising on the report’s development and content to ensure it comprehensively and objectively assesses the capabilities and risks of advanced AI. Their regular input throughout the drafting process will help build broad consensus on vital global AI safety research.

Initial findings from the report are due to be published ahead of South Korea’s AI Safety Summit this spring. A second more complete publication will then coincide with France’s summit later this year, helping inform discussions at both events.

The international report will follow a paper published by the UK last year which included declassified information from intelligence services and highlighted the risks associated with frontier AI.

Michelle Donelan, UK Secretary of State for Science, Innovation and Technology, said: “The International Scientific Report on Advanced AI Safety will be a landmark publication, bringing the best scientific research on the risks and capabilities of frontier AI development under one roof.

“The report is one part of the enduring legacy of November’s AI Safety Summit, and I am delighted that countries who agreed the Bletchley Declaration will join us in its development.”

Professor Yoshua Bengio, pioneer AI researcher from Quebec’s Mila Institute, said the publication “will be an important tool in helping to inform the discussions at AI Safety Summits being held by the Republic of Korea and France later this year.”

The principles guiding the report’s development – inspired by the IPCC climate change assessments – are comprehensiveness, objectivity, transparency, and scientific assessment. This framework aims to ensure a thorough and balanced evaluation of AI’s risks.

A list of all participating countries and their nominated representatives can be found here.

(Photo by Ricardo Gomez Angel on Unsplash)

See also: UK and Canada sign AI compute agreement

DHS AI roadmap prioritises cybersecurity and national safety
https://www.artificialintelligence-news.com/2023/11/15/dhs-ai-roadmap-prioritises-cybersecurity-national-safety/
Wed, 15 Nov 2023 10:10:47 +0000
The Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) has launched its inaugural Roadmap for AI.

Viewed as a crucial step in the broader governmental effort to ensure the secure development and implementation of AI capabilities, the move aligns with President Biden’s recent Executive Order.

“DHS has a broad leadership role in advancing the responsible use of AI and this cybersecurity roadmap is one important element of our work,” said Secretary of Homeland Security Alejandro N. Mayorkas.

“The Biden-Harris Administration is committed to building a secure and resilient digital ecosystem that promotes innovation and technological progress.” 

Following the Executive Order, DHS is mandated to globally promote AI safety standards, safeguard US networks and critical infrastructure, and address risks associated with AI—including potential use “to create weapons of mass destruction”.

“In last month’s Executive Order, the President called on DHS to promote the adoption of AI safety standards globally and help ensure the safe, secure, and responsible use and development of AI,” added Mayorkas.

“CISA’s roadmap lays out the steps that the agency will take as part of our Department’s broader efforts to both leverage AI and mitigate its risks to our critical infrastructure and cyber defenses.”

CISA’s roadmap outlines five strategic lines of effort, providing a blueprint for concrete initiatives and a responsible approach to integrating AI into cybersecurity.

CISA Director Jen Easterly highlighted the dual nature of AI, acknowledging its promise in enhancing cybersecurity while recognising the immense risks it poses.

“Artificial Intelligence holds immense promise in enhancing our nation’s cybersecurity, but as the most powerful technology of our lifetimes, it also presents enormous risks,” commented Easterly.

“Our Roadmap for AI – focused at the nexus of AI, cyber defense, and critical infrastructure – sets forth an agency-wide plan to promote the beneficial uses of AI to enhance cybersecurity capabilities; ensure AI systems are protected from cyber-based threats; and deter the malicious use of AI capabilities to threaten the critical infrastructure Americans rely on every day.”

The outlined lines of effort are as follows:

  • Responsibly use AI to support our mission: CISA commits to using AI-enabled tools ethically and responsibly to strengthen cyber defense and support its critical infrastructure mission. The adoption of AI will align with constitutional principles and all relevant laws and policies.
  • Assess and Assure AI systems: CISA will assess and assist in secure AI-based software adoption across various stakeholders, establishing assurance through best practices and guidance for secure and resilient AI development.
  • Protect critical infrastructure from malicious use of AI: CISA will evaluate and recommend mitigation of AI threats to critical infrastructure, collaborating with government agencies and industry partners. The establishment of JCDC.AI aims to facilitate focused collaboration on AI-related threats.
  • Collaborate and communicate on key AI efforts: CISA commits to contributing to interagency efforts, supporting policy approaches for the US government’s national strategy on cybersecurity and AI, and coordinating with international partners to advance global AI security practices.
  • Expand AI expertise in our workforce: CISA will educate its workforce on AI systems and techniques, actively recruiting individuals with AI expertise and ensuring a comprehensive understanding of the legal, ethical, and policy aspects of AI-based software systems.

“This is a step in the right direction. It shows the government is taking the potential threats and benefits of AI seriously. The roadmap outlines a comprehensive strategy for leveraging AI to enhance cybersecurity, protect critical infrastructure, and foster collaboration. It also emphasises the importance of security in AI system design and development,” explains Joseph Thacker, AI and security researcher at AppOmni.

“The roadmap is pretty comprehensive. Nothing stands out as missing initially, although the devil is in the details when it comes to security, and even more so when it comes to a completely new technology. CISA’s ability to keep up may depend on their ability to get talent or train internal folks. Both of those are difficult to accomplish at scale.”

CISA invites stakeholders, partners, and the public to explore the Roadmap for Artificial Intelligence and gain insights into the strategic vision for AI technology and cybersecurity here.

See also: Google expands partnership with Anthropic to enhance AI safety

Google expands partnership with Anthropic to enhance AI safety
https://www.artificialintelligence-news.com/2023/11/10/google-expands-partnership-anthropic-enhance-ai-safety/
Fri, 10 Nov 2023 15:56:36 +0000
Google has announced the expansion of its partnership with Anthropic to work towards achieving the highest standards of AI safety.

The collaboration between Google and Anthropic dates back to Anthropic’s founding in 2021. Since then, the two companies have worked closely together, with Anthropic building one of the largest Google Kubernetes Engine (GKE) clusters in the industry.

“Our longstanding partnership with Google is founded on a shared commitment to develop AI responsibly and deploy it in a way that benefits society,” said Dario Amodei, co-founder and CEO of Anthropic.

“We look forward to our continued collaboration as we work to make steerable, reliable and interpretable AI systems available to more businesses around the world.”

Anthropic utilises Google’s AlloyDB, a fully managed PostgreSQL-compatible database, for handling transactional data with high performance and reliability. Additionally, Google’s BigQuery data warehouse is employed to analyse vast datasets, extracting valuable insights for Anthropic’s operations.

As part of the expanded partnership, Anthropic will leverage Google’s latest generation Cloud TPU v5e chips for AI inference. Anthropic will use the chips to efficiently scale its powerful Claude large language model, which ranks only behind GPT-4 in many benchmarks.

The announcement comes on the heels of both companies participating in the inaugural AI Safety Summit (AISS) at Bletchley Park, hosted by the UK government. The summit brought together government officials, technology leaders, and experts to address concerns around frontier AI.

Google and Anthropic are also engaged in collaborative efforts with the Frontier Model Forum and MLCommons, contributing to the development of robust measures for AI safety.

To enhance security for organisations deploying Anthropic’s models on Google Cloud, Anthropic is now utilising Google Cloud’s security services. This includes Chronicle Security Operations, Secure Enterprise Browsing, and Security Command Center, providing visibility, threat detection, and access control.

“Anthropic and Google Cloud share the same values when it comes to developing AI–it needs to be done in both a bold and responsible way,” commented Thomas Kurian, CEO of Google Cloud. 

“This expanded partnership with Anthropic – built on years of working together – will bring AI to more people safely and securely, and provides another example of how the most innovative and fastest growing AI startups are building on Google Cloud.”

Google and Anthropic’s expanded partnership promises to be a critical step in advancing AI safety standards and fostering responsible development.

(Photo by charlesdeluvio on Unsplash)

See also: Amazon is building a LLM to rival OpenAI and Google

NIST announces AI consortium to shape US policies
https://www.artificialintelligence-news.com/2023/11/03/nist-announces-ai-consortium-shape-us-policies/
Fri, 03 Nov 2023 10:13:14 +0000
In a bid to address the challenges associated with the development and deployment of AI, the National Institute of Standards and Technology (NIST) has formed a new consortium. 

This development was announced in a document published in the Federal Register on November 2, alongside an official notice inviting applications from individuals with the relevant credentials.

The document states, “This notice is the initial step for NIST in collaborating with non-profit organisations, universities, other government agencies, and technology companies to address challenges associated with the development and deployment of AI.”

The primary objective of this collaboration is to create and implement specific policies and measurements that ensure a human-centred approach to AI safety and governance within the United States.

Collaborators within the consortium will be tasked with a range of functions, including the development of measurement and benchmarking tools, policy recommendations, red-teaming efforts, psychoanalysis, and environmental analysis.

NIST’s initiative comes in response to a recent executive order issued by US President Joseph Biden, which outlined six new standards for AI safety and security.

While European and Asian countries have been proactive in instituting policies governing AI systems concerning user and citizen privacy, security, and potential unintended consequences, the US has lagged behind.

President Biden’s executive order and the establishment of the Safety Institute Consortium mark significant strides in the right direction, yet there remains a lack of clarity regarding the timeline for the implementation of laws governing AI development and deployment in the US.

Many experts have expressed concerns about the adequacy of current laws, designed for conventional businesses and technology, when applied to the rapidly evolving AI sector.

The formation of the AI consortium signifies a crucial step towards shaping the future of AI policies in the US. It reflects a collaborative effort between government bodies, non-profit organisations, universities, and technology companies to ensure responsible and ethical AI practices within the nation.

(Photo by Muhammad Rizki on Unsplash)

See also: UK paper highlights AI risks ahead of global Safety Summit

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post NIST announces AI consortium to shape US policies appeared first on AI News.

Biden issues executive order to ensure responsible AI development https://www.artificialintelligence-news.com/2023/10/30/biden-issues-executive-order-responsible-ai-development/ https://www.artificialintelligence-news.com/2023/10/30/biden-issues-executive-order-responsible-ai-development/#respond Mon, 30 Oct 2023 10:18:14 +0000 https://www.artificialintelligence-news.com/?p=13798 President Biden has issued an executive order aimed at positioning the US at the forefront of AI while ensuring the technology’s safe and responsible use. The order establishes stringent standards for AI safety and security, safeguards Americans’ privacy, promotes equity and civil rights, protects consumers and workers, fosters innovation and competition, and enhances American leadership... Read more »

The post Biden issues executive order to ensure responsible AI development appeared first on AI News.

President Biden has issued an executive order aimed at positioning the US at the forefront of AI while ensuring the technology’s safe and responsible use.

The order establishes stringent standards for AI safety and security, safeguards Americans’ privacy, promotes equity and civil rights, protects consumers and workers, fosters innovation and competition, and enhances American leadership on the global stage.

Key actions outlined in the order:

  1. New standards for AI safety and security: The order mandates that developers of powerful AI systems share safety test results and critical information with the US government. Rigorous standards, tools, and tests will be developed to ensure AI systems are safe, secure, and trustworthy before public release. Additionally, measures will be taken to protect against the risks of using AI to engineer dangerous biological materials and combat AI-enabled fraud and deception.
  2. Protecting citizens’ privacy: The President calls on Congress to pass bipartisan data privacy legislation, prioritising federal support for privacy-preserving techniques, especially those using AI. Guidelines will be developed for federal agencies to evaluate the effectiveness of privacy-preserving techniques, including those used in AI systems.
  3. Advancing equity and civil rights: Clear guidance will be provided to prevent AI algorithms from exacerbating discrimination, especially in areas like housing and federal benefit programs. Best practices will be established for the use of AI in the criminal justice system to ensure fairness.
  4. Standing up for consumers, patients, and students: Responsible use of AI in healthcare and education will be promoted, ensuring that consumers are protected from harmful AI applications while benefiting from its advancements in these sectors.
  5. Supporting workers: Principles and best practices will be developed to mitigate the harms and maximise the benefits of AI for workers, addressing issues such as job displacement, workplace equity, and health and safety. A report on AI’s potential labour-market impacts will be produced, identifying options for strengthening federal support for workers facing labour disruptions due to AI.
  6. Promoting innovation and competition: The order aims to catalyse AI research across the US, promote a fair and competitive AI ecosystem, and expand the ability of highly skilled immigrants and non-immigrants to study, stay, and work in the US to foster innovation in the field.
  7. Advancing leadership abroad: The US will collaborate with other nations to establish international frameworks for safe and trustworthy AI deployment. Efforts will be made to accelerate the development and implementation of vital AI standards with international partners and promote the responsible development and deployment of AI abroad to address global challenges.
  8. Ensuring responsible and effective government adoption: Clear standards and guidelines will be issued for government agencies’ use of AI to protect rights and safety. Efforts will be made to help agencies acquire AI products and services more rapidly and efficiently, and an AI talent surge will be initiated to enhance government capacity in AI-related fields.

The executive order signifies a major step forward for the US in harnessing the potential of AI while safeguarding individuals’ rights and security.

“As we advance this agenda at home, the Administration will work with allies and partners abroad on a strong international framework to govern the development and use of AI,” wrote the White House in a statement.

“The actions that President Biden directed today are vital steps forward in the US’ approach on safe, secure, and trustworthy AI. More action will be required, and the Administration will continue to work with Congress to pursue bipartisan legislation to help America lead the way in responsible innovation.”

The administration’s commitment to responsible innovation is paramount and sets the stage for continued collaboration with international partners to shape the future of AI globally.

(Photo by David Everett Strickler on Unsplash)

See also: UK paper highlights AI risks ahead of global Safety Summit

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.


UK paper highlights AI risks ahead of global Safety Summit https://www.artificialintelligence-news.com/2023/10/26/uk-paper-highlights-ai-risks-ahead-global-safety-summit/ https://www.artificialintelligence-news.com/2023/10/26/uk-paper-highlights-ai-risks-ahead-global-safety-summit/#respond Thu, 26 Oct 2023 15:48:59 +0000 https://www.artificialintelligence-news.com/?p=13793 The UK Government has unveiled a comprehensive paper addressing the capabilities and risks associated with frontier AI. UK Prime Minister Rishi Sunak has spoken today on the global responsibility to confront the risks highlighted in the report and harness AI’s potential. Sunak emphasised the need for honest dialogue about the dual nature of AI: offering... Read more »

The post UK paper highlights AI risks ahead of global Safety Summit appeared first on AI News.

The UK Government has unveiled a comprehensive paper addressing the capabilities and risks associated with frontier AI.

UK Prime Minister Rishi Sunak spoke today about the global responsibility to confront the risks highlighted in the report and harness AI’s potential. Sunak emphasised the need for honest dialogue about the dual nature of AI: offering unprecedented opportunities while also posing significant dangers.

“AI will bring new knowledge, new opportunities for economic growth, new advances in human capability, and the chance to solve problems we once thought beyond us. But it also brings new dangers and new fears,” said Sunak.

“So, the responsible thing for me to do is to address those fears head-on, giving you the peace of mind that we will keep you safe while making sure you and your children have all the opportunities for a better future that AI can bring.

“Doing the right thing, not the easy thing, means being honest with people about the risks from these technologies.”

The report delves into the rapid advancements of frontier AI, drawing on numerous sources. It highlights the diverse perspectives within scientific, expert, and global communities regarding the risks associated with the swift evolution of AI technology. 

The publication comprises three key sections:

  1. Capabilities and risks from frontier AI: This section presents a discussion paper advocating further research into AI risk. It delineates the current state of frontier AI capabilities, potential future improvements, and associated risks, including societal harms, misuse, and loss of control.
  2. Safety and security risks of generative AI to 2025: Drawing on intelligence assessments, this report outlines the potential global benefits of generative AI while highlighting the increased safety and security risks. It underscores the enhancement of threat actor capabilities and the effectiveness of attacks due to generative AI development.
  3. Future risks of frontier AI: Prepared by the Government Office for Science, this report explores uncertainties in frontier AI development, future system risks, and potential scenarios for AI up to 2030.

The report – based on declassified information from intelligence agencies – focuses on generative AI, the technology underpinning popular chatbots and image generation software. It foresees a future where AI might be exploited by terrorists to plan biological or chemical attacks, raising serious concerns about global security.

Sjuul van der Leeuw, CEO of Deployteq, commented: “It is good to see the government take a serious approach, offering a report ahead of the Safety Summit next week, however more must be done.

“An ongoing effort to address AI risks is needed and we hope that the summit brings much-needed clarity, allowing businesses and marketers to enjoy the benefits this emerging piece of technology offers, without the worry of backlash.”

The report highlights that generative AI could be utilised to gather knowledge on physical attacks by non-state violent actors, including creating chemical, biological, and radiological weapons.

Although companies are working to implement safeguards, the report emphasises the varying effectiveness of these measures. Obstacles to obtaining the necessary knowledge, raw materials, and equipment for such attacks are decreasing, with AI potentially accelerating this process.

Additionally, the report warns that AI-driven cyber-attacks are likely to become faster-paced, more effective, and larger in scale by 2025. AI could aid hackers in mimicking official language and overcoming challenges previously faced in this area.

However, some experts have questioned the UK Government’s approach.

Rashik Parmar MBE, CEO of BCS, The Chartered Institute for IT, said: “Over 1,300 technologists and leaders signed our open letter calling AI a force for good rather than an existential threat to humanity.

“AI won’t grow up like The Terminator. If we take the proper steps, it will be a trusted co-pilot from our earliest school days to our retirement.”

The AI Safety Summit will aim to foster healthy discussion around how to address frontier AI risks, encompassing misuse by non-state actors for cyberattacks or bioweapon design, as well as concerns about AI systems acting autonomously contrary to human intentions. Discussions at the summit will also extend to broader societal impacts, such as election disruption, bias, crime, and online safety.

Claire Trachet, CEO of Trachet, commented: “The fast-growing nature of AI has made it difficult for governments to balance creating effective regulation which safeguards the interest of businesses and consumers without stifling investment opportunities. Even though there are some forms of risk management and different reports coming out now, none of them are true coordinated approaches.

“The UK Government’s commitment to AI safety is commendable, but the criticism surrounding the summit serves as a reminder of the importance of a balanced, constructive, and forward-thinking approach to AI regulation.”

If the UK Government’s report is anything to go by, the need for collaboration around proportionate but rigorous measures to manage the risks posed by AI is more imperative than ever.

The global AI Safety Summit is set to take place at the historic Bletchley Park on 1 – 2 November 2023.

(Image Credit: GOV.UK)

See also: BSI: Closing ‘AI confidence gap’ key to unlocking benefits


White House secures safety commitments from eight more AI companies https://www.artificialintelligence-news.com/2023/09/13/white-house-safety-commitments-eight-more-ai-companies/ https://www.artificialintelligence-news.com/2023/09/13/white-house-safety-commitments-eight-more-ai-companies/#respond Wed, 13 Sep 2023 14:56:10 +0000 https://www.artificialintelligence-news.com/?p=13585 The Biden-Harris Administration has announced that it has secured a second round of voluntary safety commitments from eight prominent AI companies. Representatives from Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability attended the White House for the announcement. These eight companies have pledged to play a pivotal role in promoting the development of... Read more »

The post White House secures safety commitments from eight more AI companies appeared first on AI News.

The Biden-Harris Administration has announced that it has secured a second round of voluntary safety commitments from eight prominent AI companies.

Representatives from Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability attended the White House for the announcement. These eight companies have pledged to play a pivotal role in promoting the development of safe, secure, and trustworthy AI.

The Biden-Harris Administration is actively working on an Executive Order and pursuing bipartisan legislation to ensure the US leads the way in responsible AI development that unlocks its potential while managing its risks.

The commitments made by these companies revolve around three fundamental principles: safety, security, and trust. They have committed to:

  1. Ensure products are safe before introduction:

The companies commit to rigorous internal and external security testing of their AI systems before releasing them to the public. This includes assessments by independent experts, helping guard against significant AI risks such as biosecurity, cybersecurity, and broader societal effects.

They will also actively share information on AI risk management with governments, civil society, academia, and across the industry. This collaborative approach will include sharing best practices for safety, information on attempts to circumvent safeguards, and technical cooperation.

  2. Build systems with security as a top priority:

The companies have pledged to invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights. Recognising the critical importance of these model weights in AI systems, they commit to releasing them only when intended and when security risks are adequately addressed.

Additionally, the companies will facilitate third-party discovery and reporting of vulnerabilities in their AI systems. This proactive approach ensures that issues can be identified and resolved promptly even after an AI system is deployed.

  3. Earn the public’s trust:

To enhance transparency and accountability, the companies will develop robust technical mechanisms – such as watermarking systems – to indicate when content is AI-generated. This step aims to foster creativity and productivity while reducing the risks of fraud and deception.

They will also publicly report on their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use, covering both security and societal risks, including fairness and bias. Furthermore, these companies are committed to prioritising research on the societal risks posed by AI systems, including addressing harmful bias and discrimination.

These leading AI companies will also develop and deploy advanced AI systems to address significant societal challenges, from cancer prevention to climate change mitigation, contributing to the prosperity, equality, and security of all.

The Biden-Harris Administration’s engagement with these commitments extends beyond the US, with consultations involving numerous international partners and allies. These commitments complement global initiatives, including the UK’s Summit on AI Safety, Japan’s leadership of the G-7 Hiroshima Process, and India’s leadership as Chair of the Global Partnership on AI.

The announcement marks a significant milestone in the journey towards responsible AI development, with industry leaders and the government coming together to ensure that AI technology benefits society while mitigating its inherent risks.

(Photo by Tabrez Syed on Unsplash)

See also: UK’s AI ecosystem to hit £2.4T by 2027, third in global race

