The post UK and South Korea to co-host AI Seoul Summit appeared first on AI News.
The two-day event will feature a virtual leaders’ session, co-chaired by British Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol, followed by an in-person meeting among digital ministers, which UK Technology Secretary Michelle Donelan and Korean Minister of Science and ICT Lee Jong-Ho will co-host.
This summit builds upon the historic discussions held at Bletchley Park in the UK last year, emphasising AI safety, inclusion, and innovation. It aims to ensure that AI advancements benefit humanity while minimising potential risks and enhancing global governance of tech innovation.
“The summit we held at Bletchley Park was a generational moment,” stated Donelan. “If we continue to bring international governments and a broad range of voices together, I have every confidence that we can continue to develop a global approach which will allow us to realise the transformative potential of this generation-defining technology safely and responsibly.”
Echoing this sentiment, Minister Lee Jong-Ho highlighted the importance of the upcoming Seoul Summit in furthering global cooperation on AI safety and innovation.
“AI is advancing at an unprecedented pace that exceeds our expectations, and it is crucial to establish global norms and governance to harness such technological innovations to enhance the welfare of humanity,” explained Lee. “We hope that the AI Seoul Summit will serve as an opportunity to strengthen global cooperation on not only AI safety but also AI innovation and inclusion, and promote sustainable AI development.”
Innovation remains a focal point for the UK, evidenced by initiatives like the Manchester Prize and the formation of the AI Safety Institute: the first state-backed organisation dedicated to AI safety. This proactive approach mirrors the UK’s commitment to international collaboration on AI governance, underscored by a recent agreement with the US on AI safety measures.
Accompanying the Seoul Summit will be the release of the International Scientific Report on Advanced AI Safety. This report, independently led by Turing Award winner Yoshua Bengio, represents a collective effort to consolidate the best scientific research on AI safety. It underscores the summit’s role not only as a forum for discussion but as a catalyst for actionable insight into AI’s safe development.
The agenda of the AI Seoul Summit reflects the urgency of addressing the challenges and opportunities presented by AI, from model safety evaluations to fostering sustainable AI development. As the world embraces AI innovation, the summit embodies a concerted effort to shape a future where technology serves humanity safely and delivers prosperity and inclusivity for all.
See also: US and Japan announce sweeping AI and tech collaboration
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post UK and US sign pact to develop AI safety tests appeared first on AI News.
The Memorandum of Understanding – signed Monday by UK Technology Secretary Michelle Donelan and US Commerce Secretary Gina Raimondo – establishes a partnership to align the scientific approaches of both countries in rapidly iterating robust evaluation methods for cutting-edge AI models, systems, and agents.
Under the deal, the UK’s new AI Safety Institute and the upcoming US organisation will exchange research expertise with the aim of mitigating AI risks, including how to independently evaluate private AI models from companies such as OpenAI. The partnership is modelled on the security collaboration between GCHQ and the National Security Agency.
“This agreement represents a landmark moment, as the UK and the United States deepen our enduring special relationship to address the defining technology challenge of our generation,” stated Donelan. “Only by working together can we address the technology’s risks head on and harness its enormous potential to help us all live easier and healthier lives.”
The partnership follows through on commitments made at the AI Safety Summit hosted in the UK last November. The institutes plan to build a common approach to AI safety testing and share capabilities to tackle risks effectively. They intend to conduct at least one joint public testing exercise on an openly accessible AI model and explore personnel exchanges.
Raimondo emphasised the significance of the collaboration, stating: “AI is the defining technology of our generation. This partnership is going to accelerate both of our Institutes’ work across the full spectrum of risks, whether to our national security or to our broader society.”
Both governments recognise AI’s rapid development and the urgent need for a shared global approach to safety that can keep pace with emerging risks. The partnership takes effect immediately, allowing seamless cooperation between the organisations.
“By working together, we are furthering the long-lasting special relationship between the US and UK and laying the groundwork to ensure that we’re keeping AI safe both now and in the future,” added Raimondo.
In addition to joint testing and capability sharing, the UK and US will exchange vital information about AI model capabilities, risks, and fundamental technical research. This aims to underpin a common scientific foundation for AI safety testing that can be adopted by researchers worldwide.
Despite the focus on risk, Donelan insisted the UK has no plans to regulate AI more broadly in the short term. In contrast, President Joe Biden has taken a stricter position on AI models that threaten national security, and the EU AI Act has adopted tougher regulations.
Industry experts welcomed the collaboration as essential for promoting trust and safety in AI development and adoption across sectors like marketing, finance, and customer service.
“Ensuring AI’s development and use are governed by trust and safety is paramount,” said Ramprakash Ramamoorthy of Zoho. “Taking safeguards to protect training data mitigates risks and bolsters confidence among those deploying AI solutions.”
Dr Henry Balani of Encompass added: “Mitigating the risks of AI, through this collaboration agreement with the US, is a key step towards mitigating risks of financial crime, fostering collaboration, and supporting innovation in a crucial, advancing area of technology.”
(Photo by Art Lasovsky)
See also: IPPR: 8M UK careers at risk of ‘job apocalypse’ from AI
The post UK and France to collaborate on AI following Horizon membership appeared first on AI News.
French Minister for Higher Education and Research, Sylvie Retailleau, is scheduled to meet with UK Secretary of State Michelle Donelan in London today for discussions marking a pivotal moment in bilateral scientific cooperation.
Building upon a rich history of collaboration that has yielded groundbreaking innovations such as the Concorde and the Channel Tunnel, the ministers will endorse a joint declaration aimed at deepening research ties between the two nations. This includes a commitment of £800,000 in new funding towards joint research efforts, particularly within the framework of Horizon Europe.
A landmark partnership between the UK’s AI Safety Institute and France’s Inria will also be unveiled, signifying a shared commitment to the responsible development of AI technology. This collaboration is timely, given France’s upcoming hosting of the AI Safety Summit later this year—which aims to build upon previous agreements and discussions on frontier AI testing achieved during the UK edition last year.
Furthermore, the establishment of the French-British joint committee on Science, Technology, and Innovation represents an opportunity to foster cooperation across a range of fields, including low-carbon hydrogen, space observation, AI, and research security.
UK Secretary of State Michelle Donelan said:
“The links between the UK and France’s brightest minds are deep and longstanding, from breakthroughs in aerospace to tackling climate change. It is only right that we support our innovators, to unleash the power of their ideas to create jobs and grow businesses in concert with our closest neighbour on the continent.
Research is fundamentally collaborative, and alongside our bespoke deal on Horizon Europe, this deepening partnership with France – along with our joint work on AI safety – is another key step in realising the UK’s science superpower ambitions.”
The collaboration between the UK and France underscores their shared commitment to advancing scientific research and innovation, with a focus on emerging technologies such as AI and quantum computing.
Sylvie Retailleau, French Minister of Higher Education and Research, commented:
“This joint committee is a perfect illustration of the international component of research – from identifying key priorities such as hydrogen, AI, space and research security – to enabling collaborative work and exchange of ideas and good practices through funding.
Doing so with a trusted partner as the UK – who just associated to Horizon Europe – is a great opportunity to strengthen France’s science capabilities abroad, and participate in Europe’s strategic autonomy openness.”
As the UK continues to deepen its engagement with global partners in the field of science and technology, these bilateral agreements serve as a testament to its ambition to lead the way in scientific discovery and innovation on the world stage.
(Photo by Aleks Marinkovic on Unsplash)
See also: UK Home Secretary sounds alarm over deepfakes ahead of elections
The post Experts from 30 nations will contribute to global AI safety report appeared first on AI News.
The International Scientific Report on Advanced AI Safety aims to bring together the best scientific research on AI safety to inform policymakers and future discussions on the safe development of AI technology. The report builds on the legacy of last November’s UK AI Safety Summit, where countries signed the Bletchley Declaration agreeing to collaborate on AI safety issues.
An impressive Expert Advisory Panel featuring 32 prominent international figures – including chief technology officers, UN envoys, and national chief scientific advisers – has been unveiled. The panel includes experts like Dr Hiroaki Kitano, CTO of Sony in Japan, Amandeep Gill, UN Envoy on Technology, and the UK’s Dame Angela McLean, Chief Scientific Adviser.
The panel will play a crucial role in advising on the report’s development and content, ensuring it comprehensively and objectively assesses the capabilities and risks of advanced AI. Their regular input throughout the drafting process will help build broad consensus on vital global AI safety research.
Initial findings from the report are due to be published ahead of South Korea’s AI Safety Summit this spring. A second more complete publication will then coincide with France’s summit later this year, helping inform discussions at both events.
The international report will follow a paper published by the UK last year which included declassified information from intelligence services and highlighted the risks associated with frontier AI.
Michelle Donelan, UK Secretary of State for Science, Innovation and Technology, said: “The International Scientific Report on Advanced AI Safety will be a landmark publication, bringing the best scientific research on the risks and capabilities of frontier AI development under one roof.
“The report is one part of the enduring legacy of November’s AI Safety Summit, and I am delighted that countries who agreed the Bletchley Declaration will join us in its development.”
Professor Yoshua Bengio, a pioneering AI researcher from Quebec’s Mila Institute, said the publication “will be an important tool in helping to inform the discussions at AI Safety Summits being held by the Republic of Korea and France later this year.”
The principles guiding the report’s development – inspired by the IPCC climate change assessments – are comprehensiveness, objectivity, transparency, and scientific assessment. This framework aims to ensure a thorough and balanced evaluation of AI’s risks.
A list of all participating countries and their nominated representatives can be found here.
(Photo by Ricardo Gomez Angel on Unsplash)
See also: UK and Canada sign AI compute agreement
The post DHS AI roadmap prioritises cybersecurity and national safety appeared first on AI News.
The release of the roadmap is viewed as a crucial step in the broader governmental effort to ensure the secure development and implementation of AI capabilities, and the move aligns with President Biden’s recent Executive Order.
“DHS has a broad leadership role in advancing the responsible use of AI and this cybersecurity roadmap is one important element of our work,” said Secretary of Homeland Security Alejandro N. Mayorkas.
“The Biden-Harris Administration is committed to building a secure and resilient digital ecosystem that promotes innovation and technological progress.”
Following the Executive Order, DHS is mandated to globally promote AI safety standards, safeguard US networks and critical infrastructure, and address risks associated with AI—including potential use “to create weapons of mass destruction”.
“In last month’s Executive Order, the President called on DHS to promote the adoption of AI safety standards globally and help ensure the safe, secure, and responsible use and development of AI,” added Mayorkas.
“CISA’s roadmap lays out the steps that the agency will take as part of our Department’s broader efforts to both leverage AI and mitigate its risks to our critical infrastructure and cyber defenses.”
CISA’s roadmap outlines five strategic lines of effort, providing a blueprint for concrete initiatives and a responsible approach to integrating AI into cybersecurity.
CISA Director Jen Easterly highlighted the dual nature of AI, recognising its promise in enhancing cybersecurity while acknowledging the immense risks it poses.
“Artificial Intelligence holds immense promise in enhancing our nation’s cybersecurity, but as the most powerful technology of our lifetimes, it also presents enormous risks,” commented Easterly.
“Our Roadmap for AI – focused at the nexus of AI, cyber defense, and critical infrastructure – sets forth an agency-wide plan to promote the beneficial uses of AI to enhance cybersecurity capabilities; ensure AI systems are protected from cyber-based threats; and deter the malicious use of AI capabilities to threaten the critical infrastructure Americans rely on every day.”
The outlined lines of effort are as follows:

1. Responsibly use AI to support CISA’s mission
2. Assure AI systems
3. Protect critical infrastructure from malicious use of AI
4. Collaborate and communicate on key AI efforts with the interagency, international partners, and the public
5. Expand AI expertise in CISA’s workforce
“This is a step in the right direction. It shows the government is taking the potential threats and benefits of AI seriously. The roadmap outlines a comprehensive strategy for leveraging AI to enhance cybersecurity, protect critical infrastructure, and foster collaboration. It also emphasises the importance of security in AI system design and development,” explains Joseph Thacker, AI and security researcher at AppOmni.
“The roadmap is pretty comprehensive. Nothing stands out as missing initially, although the devil is in the details when it comes to security, and even more so when it comes to a completely new technology. CISA’s ability to keep up may depend on their ability to get talent or train internal folks. Both of those are difficult to accomplish at scale.”
CISA invites stakeholders, partners, and the public to explore the Roadmap for Artificial Intelligence and gain insights into the strategic vision for AI technology and cybersecurity here.
See also: Google expands partnership with Anthropic to enhance AI safety
The post Google expands partnership with Anthropic to enhance AI safety appeared first on AI News.
The partnership between Google and Anthropic dates back to Anthropic’s founding in 2021, and the two companies have worked closely since, with Anthropic building one of the largest Google Kubernetes Engine (GKE) clusters in the industry.
“Our longstanding partnership with Google is founded on a shared commitment to develop AI responsibly and deploy it in a way that benefits society,” said Dario Amodei, co-founder and CEO of Anthropic.
“We look forward to our continued collaboration as we work to make steerable, reliable and interpretable AI systems available to more businesses around the world.”
Anthropic utilises Google’s AlloyDB, a fully managed PostgreSQL-compatible database, for handling transactional data with high performance and reliability. Additionally, Google’s BigQuery data warehouse is employed to analyse vast datasets, extracting valuable insights for Anthropic’s operations.
As part of the expanded partnership, Anthropic will leverage Google’s latest-generation Cloud TPU v5e chips for AI inference, using them to efficiently scale its Claude large language model, which ranks behind only GPT-4 in many benchmarks.
The announcement comes on the heels of both companies participating in the inaugural AI Safety Summit (AISS) at Bletchley Park, hosted by the UK government. The summit brought together government officials, technology leaders, and experts to address concerns around frontier AI.
Google and Anthropic are also engaged in collaborative efforts with the Frontier Model Forum and MLCommons, contributing to the development of robust measures for AI safety.
To enhance security for organisations deploying Anthropic’s models on Google Cloud, Anthropic is now utilising Google Cloud’s security services. This includes Chronicle Security Operations, Secure Enterprise Browsing, and Security Command Center, providing visibility, threat detection, and access control.
“Anthropic and Google Cloud share the same values when it comes to developing AI–it needs to be done in both a bold and responsible way,” commented Thomas Kurian, CEO of Google Cloud.
“This expanded partnership with Anthropic – built on years of working together – will bring AI to more people safely and securely, and provides another example of how the most innovative and fastest growing AI startups are building on Google Cloud.”
Google and Anthropic’s expanded partnership promises to be a critical step in advancing AI safety standards and fostering responsible development.
(Photo by charlesdeluvio on Unsplash)
See also: Amazon is building a LLM to rival OpenAI and Google
The post NIST announces AI consortium to shape US policies appeared first on AI News.
This development was announced in a document published to the Federal Register on November 2, alongside an official notice inviting applications from individuals with the relevant credentials.
The document states, “This notice is the initial step for NIST in collaborating with non-profit organisations, universities, other government agencies, and technology companies to address challenges associated with the development and deployment of AI.”
The primary objective of this collaboration is to create and implement specific policies and measurements that ensure a human-centred approach to AI safety and governance within the United States.
Collaborators within the consortium will be tasked with a range of functions, including the development of measurement and benchmarking tools, policy recommendations, red-teaming efforts, psychoanalysis, and environmental analysis.
NIST’s initiative comes in response to a recent executive order issued by US President Joseph Biden, which outlined six new standards for AI safety and security.
While European and Asian countries have been proactive in instituting policies governing AI systems concerning user and citizen privacy, security, and potential unintended consequences, the US has lagged.
President Biden’s executive order and the establishment of the Safety Institute Consortium mark significant strides in the right direction, yet there remains a lack of clarity regarding the timeline for the implementation of laws governing AI development and deployment in the US.
Many experts have expressed concerns about the adequacy of current laws, designed for conventional businesses and technology, when applied to the rapidly evolving AI sector.
The formation of the AI consortium signifies a crucial step towards shaping the future of AI policies in the US. It reflects a collaborative effort between government bodies, non-profit organisations, universities, and technology companies to ensure responsible and ethical AI practices within the nation.
(Photo by Muhammad Rizki on Unsplash)
See also: UK paper highlights AI risks ahead of global Safety Summit
The post Biden issues executive order to ensure responsible AI development appeared first on AI News.
The order establishes stringent standards for AI safety and security, safeguards Americans’ privacy, promotes equity and civil rights, protects consumers and workers, fosters innovation and competition, and enhances American leadership on the global stage.
Key actions outlined in the order include:

- Requiring developers of the most powerful AI systems to share safety test results with the US government
- Developing standards, tools, and tests to help ensure AI systems are safe, secure, and trustworthy
- Protecting against the risks of using AI to engineer dangerous biological materials
- Establishing standards and best practices for detecting AI-generated content and authenticating official content
- Advancing a cybersecurity programme to develop AI tools that find and fix vulnerabilities in critical software
The executive order signifies a major step forward in the US towards harnessing the potential of AI while safeguarding individuals’ rights and security.
“As we advance this agenda at home, the Administration will work with allies and partners abroad on a strong international framework to govern the development and use of AI,” wrote the White House in a statement.
“The actions that President Biden directed today are vital steps forward in the US’ approach on safe, secure, and trustworthy AI. More action will be required, and the Administration will continue to work with Congress to pursue bipartisan legislation to help America lead the way in responsible innovation.”
The administration’s commitment to responsible innovation is paramount and sets the stage for continued collaboration with international partners to shape the future of AI globally.
(Photo by David Everett Strickler on Unsplash)
See also: UK paper highlights AI risks ahead of global Safety Summit
The post UK paper highlights AI risks ahead of global Safety Summit appeared first on AI News.
UK Prime Minister Rishi Sunak has spoken today on the global responsibility to confront the risks highlighted in the report and harness AI’s potential. Sunak emphasised the need for honest dialogue about the dual nature of AI: offering unprecedented opportunities, while also posing significant dangers.
“AI will bring new knowledge, new opportunities for economic growth, new advances in human capability, and the chance to solve problems we once thought beyond us. But it also brings new dangers and new fears,” said Sunak.
“So, the responsible thing for me to do is to address those fears head-on, giving you the peace of mind that we will keep you safe while making sure you and your children have all the opportunities for a better future that AI can bring.
“Doing the right thing, not the easy thing, means being honest with people about the risks from these technologies.”
The report delves into the rapid advancements of frontier AI, drawing on numerous sources. It highlights the diverse perspectives within scientific, expert, and global communities regarding the risks associated with the swift evolution of AI technology.
The publication comprises three key sections:
The report – based on declassified information from intelligence agencies – focuses on generative AI, the technology underpinning popular chatbots and image generation software. It foresees a future where AI might be exploited by terrorists to plan biological or chemical attacks, raising serious concerns about global security.
Sjuul van der Leeuw, CEO of Deployteq, commented: “It is good to see the government take a serious approach, offering a report ahead of the Safety Summit next week; however, more must be done.
“An ongoing effort to address AI risks is needed and we hope that the summit brings much-needed clarity, allowing businesses and marketers to enjoy the benefits this emerging piece of technology offers, without the worry of backlash.”
The report highlights that generative AI could be utilised to gather knowledge on physical attacks by non-state violent actors, including creating chemical, biological, and radiological weapons.
Although companies are working to implement safeguards, the report emphasises the varying effectiveness of these measures. Obstacles to obtaining the necessary knowledge, raw materials, and equipment for such attacks are decreasing, with AI potentially accelerating this process.
Additionally, the report warns that AI-driven cyber-attacks are likely to become faster-paced, more effective, and larger in scale by 2025. AI could aid hackers in mimicking official language and overcoming challenges previously faced in this area.
However, some experts have questioned the UK Government’s approach.
Rashik Parmar MBE, CEO of BCS, The Chartered Institute for IT, said: “Over 1,300 technologists and leaders signed our open letter calling AI a force for good rather than an existential threat to humanity.
“AI won’t grow up like The Terminator. If we take the proper steps, it will be a trusted co-pilot from our earliest school days to our retirement.”
The AI Safety Summit will aim to foster healthy discussion around how to address frontier AI risks, encompassing misuse by non-state actors for cyberattacks or bioweapon design and concerns related to AI systems acting autonomously contrary to human intentions. Discussions at the summit will also extend to broader societal impacts, such as election disruption, bias, crime, and online safety.
Claire Trachet, CEO of Trachet, commented: “The fast-growing nature of AI has made it difficult for governments to balance creating effective regulation which safeguards the interest of businesses and consumers without stifling investment opportunities. Even though there are some forms of risk management and different reports coming out now, none of them are true coordinated approaches.
“The UK Government’s commitment to AI safety is commendable, but the criticism surrounding the summit serves as a reminder of the importance of a balanced, constructive, and forward-thinking approach to AI regulation.”
If the UK Government’s report is anything to go by, the need for collaboration around proportionate but rigorous measures to manage the risks posed by AI is more imperative than ever.
The global AI Safety Summit is set to take place at the historic Bletchley Park on 1–2 November 2023.
(Image Credit: GOV.UK)
See also: BSI: Closing ‘AI confidence gap’ key to unlocking benefits
The post White House secures safety commitments from eight more AI companies appeared first on AI News.
Representatives from Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability attended the White House for the announcement. These eight companies have pledged to play a pivotal role in promoting the development of safe, secure, and trustworthy AI.
The Biden-Harris Administration is actively working on an Executive Order and pursuing bipartisan legislation to ensure the US leads the way in responsible AI development that unlocks its potential while managing its risks.
The commitments made by these companies revolve around three fundamental principles: safety, security, and trust.
The companies commit to rigorous internal and external security testing of their AI systems before releasing them to the public. This includes assessments by independent experts, helping to guard against significant AI risks in areas such as biosecurity and cybersecurity, as well as broader societal effects.
They will also actively share information on AI risk management with governments, civil society, academia, and across the industry. This collaborative approach will include sharing best practices for safety, information on attempts to circumvent safeguards, and technical cooperation.
The companies have pledged to invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights. Recognising the critical importance of these model weights in AI systems, they commit to releasing them only when intended and when security risks are adequately addressed.
Additionally, the companies will facilitate third-party discovery and reporting of vulnerabilities in their AI systems. This proactive approach ensures that issues can be identified and resolved promptly even after an AI system is deployed.
To enhance transparency and accountability, the companies will develop robust technical mechanisms – such as watermarking systems – to indicate when content is AI-generated. This step aims to foster creativity and productivity while reducing the risks of fraud and deception.
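As a rough, hypothetical sketch of the general idea behind such provenance mechanisms (not any company’s actual scheme; the key and function names below are illustrative assumptions), a provider could attach a keyed signature to generated output so that its origin can later be verified:

```python
import hashlib
import hmac

# Illustrative provider-side signing key; a real deployment would manage
# keys securely and likely use asymmetric signatures instead.
SIGNING_KEY = b"provider-secret-key"

def tag_generated_content(content: bytes) -> str:
    """Return a provenance tag asserting this content was AI-generated."""
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()

def verify_tag(content: bytes, tag: str) -> bool:
    """Check whether a tag authentically matches the content."""
    expected = tag_generated_content(content)
    return hmac.compare_digest(expected, tag)

sample = b"an AI-generated paragraph"
tag = tag_generated_content(sample)
assert verify_tag(sample, tag)          # authentic content verifies
assert not verify_tag(b"edited text", tag)  # tampered content fails
```

Real provenance systems, such as C2PA-style content credentials, embed cryptographic signatures in file metadata and use public-key verification, so checking a tag does not require access to the signing key.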
They will also publicly report on their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use, covering both security and societal risks, including fairness and bias. Furthermore, these companies are committed to prioritising research on the societal risks posed by AI systems, including addressing harmful bias and discrimination.
These leading AI companies will also develop and deploy advanced AI systems to address significant societal challenges, from cancer prevention to climate change mitigation, contributing to the prosperity, equality, and security of all.
The Biden-Harris Administration’s engagement with these commitments extends beyond the US, with consultations involving numerous international partners and allies. These commitments complement global initiatives, including the UK’s Summit on AI Safety, Japan’s leadership of the G-7 Hiroshima Process, and India’s leadership as Chair of the Global Partnership on AI.
The announcement marks a significant milestone in the journey towards responsible AI development, with industry leaders and the government coming together to ensure that AI technology benefits society while mitigating its inherent risks.
(Photo by Tabrez Syed on Unsplash)
See also: UK’s AI ecosystem to hit £2.4T by 2027, third in global race