Latest AI Legislation & Government News | AI News https://www.artificialintelligence-news.com/categories/ai-legislation-government/ Artificial Intelligence News Thu, 06 Jun 2024 15:39:56 +0000 en-GB hourly 1 https://www.artificialintelligence-news.com/wp-content/uploads/sites/9/2020/09/ai-icon-60x60.png Latest AI Legislation & Government News | AI News https://www.artificialintelligence-news.com/categories/ai-legislation-government/ 32 32 AI pioneers turn whistleblowers and demand safeguards https://www.artificialintelligence-news.com/2024/06/06/ai-pioneers-turn-whistleblowers-demand-safeguards/ https://www.artificialintelligence-news.com/2024/06/06/ai-pioneers-turn-whistleblowers-demand-safeguards/#respond Thu, 06 Jun 2024 15:39:54 +0000 https://www.artificialintelligence-news.com/?p=14962 OpenAI is facing a wave of internal strife and external criticism over its practices and the potential risks posed by its technology.  In May, several high-profile employees departed from the company, including Jan Leike, the former head of OpenAI’s “super alignment” efforts to ensure advanced AI systems remain aligned with human values. Leike’s exit came... Read more »

The post AI pioneers turn whistleblowers and demand safeguards appeared first on AI News.

]]>
OpenAI is facing a wave of internal strife and external criticism over its practices and the potential risks posed by its technology. 

In May, several high-profile employees departed from the company, including Jan Leike, the former head of OpenAI’s “super alignment” efforts to ensure advanced AI systems remain aligned with human values. Leike’s exit came shortly after OpenAI unveiled its new flagship GPT-4o model, which it touted as “magical” at its Spring Update event.

According to reports, Leike’s departure was driven by constant disagreements over security measures, monitoring practices, and the prioritisation of flashy product releases over safety considerations.

Leike’s exit has opened a Pandora’s box for the AI firm. Former OpenAI board members have come forward with allegations of psychological abuse levelled against CEO Sam Altman and the company’s leadership.

The growing internal turmoil at OpenAI coincides with mounting external concerns about the potential risks posed by generative AI technology like the company’s own language models. Critics have warned about the imminent existential threat of advanced AI surpassing human capabilities, as well as more immediate risks like job displacement and the weaponisation of AI for misinformation and manipulation campaigns.

In response, a group of current and former employees from OpenAI, Anthropic, DeepMind, and other leading AI companies have penned an open letter addressing these risks.

“We are current and former employees at frontier AI companies, and we believe in the potential of AI technology to deliver unprecedented benefits to humanity. We also understand the serious risks posed by these technologies,” the letter states.

“These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction. AI companies themselves have acknowledged these risks, as have governments across the world, and other AI experts.”

The letter, which has been signed by 13 employees and endorsed by AI pioneers Yoshua Bengio and Geoffrey Hinton, outlines four core demands aimed at protecting whistleblowers and fostering greater transparency and accountability around AI development:

  1. That companies will not enforce non-disparagement clauses or retaliate against employees for raising risk-related concerns.
  2. That companies will facilitate a verifiably anonymous process for employees to raise concerns to boards, regulators, and independent experts.
  3. That companies will support a culture of open criticism and allow employees to publicly share risk-related concerns, with appropriate protection of trade secrets.
  4. That companies will not retaliate against employees who share confidential risk-related information after other processes have failed.

“They and others have bought into the ‘move fast and break things’ approach and that is the opposite of what is needed for technology this powerful and this poorly understood,” said Daniel Kokotajlo, a former OpenAI employee who left due to concerns over the company’s values and lack of responsibility.

The demands come amid reports that OpenAI has forced departing employees to sign non-disclosure agreements preventing them from criticising the company or risk losing their vested equity. OpenAI CEO Sam Altman admitted being “embarrassed” by the situation but claimed the company had never actually clawed back anyone’s vested equity.

As the AI revolution charges forward, the internal strife and whistleblower demands at OpenAI underscore the growing pains and unresolved ethical quandaries surrounding the technology.

See also: OpenAI disrupts five covert influence operations

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post AI pioneers turn whistleblowers and demand safeguards appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2024/06/06/ai-pioneers-turn-whistleblowers-demand-safeguards/feed/ 0
EU launches office to implement AI Act and foster innovation https://www.artificialintelligence-news.com/2024/05/30/eu-launches-office-implement-ai-act-foster-innovation/ https://www.artificialintelligence-news.com/2024/05/30/eu-launches-office-implement-ai-act-foster-innovation/#respond Thu, 30 May 2024 12:22:08 +0000 https://www.artificialintelligence-news.com/?p=14903 The European Union has launched a new office dedicated to overseeing the implementation of its landmark AI Act, which is regarded as one of the most comprehensive AI regulations in the world. This new initiative adopts a risk-based approach, imposing stringent regulations on higher-risk AI applications to ensure their safe and ethical deployment. The primary... Read more »

The post EU launches office to implement AI Act and foster innovation appeared first on AI News.

]]>
The European Union has launched a new office dedicated to overseeing the implementation of its landmark AI Act, which is regarded as one of the most comprehensive AI regulations in the world. This new initiative adopts a risk-based approach, imposing stringent regulations on higher-risk AI applications to ensure their safe and ethical deployment.

The primary goal of this office is to promote the “future development, deployment and use” of AI technologies, aiming to harness their societal and economic benefits while mitigating associated risks. By focusing on innovation and safety, the office seeks to position the EU as a global leader in AI regulation and development.

According to Margerthe Vertager, the EU competition chief, the new office will play a “key role” in implementing the AI Act, particularly with regard to general-purpose AI models. She stated, “Together with developers and a scientific community, the office will evaluate and test general-purpose AI to ensure that AI serves us as humans and upholds our European values.”

Sridhar Iyengar, Managing Director for Zoho Europe, welcomed the establishment of the AI office, noting, “The establishment of the AI office in the European Commission to play a key role with the implementation of the EU AI Act is a welcome sign of progress, and it is encouraging to see the EU positioning itself as a global leader in AI regulation. We hope to continue to see collaboration between governments, businesses, academics and industry experts to guide on safe use of AI to boost business growth.”

Iyengar highlighted the dual nature of AI’s impact on businesses, pointing out both its benefits and concerns. He emphasised the importance of adhering to best practice guidance and legislative guardrails to ensure safe and ethical AI adoption.

“AI can drive innovation in business tools, helping to improve fraud detection, forecasting, and customer data analysis to name a few. These benefits not only have the potential to elevate customer experience but can increase efficiency, present insights, and suggest actions to drive further success,” Iyengar said.

The office will be staffed by more than 140 individuals, including technology specialists, administrative assistants, lawyers, policy specialists, and economists. It will consist of various units focusing on regulation and compliance, as well as safety and innovation, reflecting the multifaceted approach needed to govern AI effectively.

Rachael Hays, Transformation Director for Definia, part of The IN Group, commented: “The establishment of a dedicated AI Office within the European Commission underscores the EU’s commitment to both innovation and regulation which is undoubtedly crucial in this rapidly evolving AI landscape.”

Hays also pointed out the potential for workforce upskilling that this initiative provides. She referenced findings from their Tech and the Boardroom research, which revealed that over half of boardroom leaders view AI as the biggest direct threat to their organisations.

“This initiative directly addresses these fears as employees across various sectors are given the opportunity to adapt and thrive in an AI-driven world. The AI Office offers promising hope and guidance in developing economic benefits while mitigating risks associated with AI technology, something we should all get on board with,” she added.

As the EU takes these steps towards comprehensive AI governance, the office’s work will be pivotal in driving forward both innovation and safety in the field.

(Photo by Sara Kurfeß)

See also: Elon Musk’s xAI secures $6B to challenge OpenAI in AI race

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post EU launches office to implement AI Act and foster innovation appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2024/05/30/eu-launches-office-implement-ai-act-foster-innovation/feed/ 0
OpenAI faces complaint over fictional outputs https://www.artificialintelligence-news.com/2024/04/29/openai-faces-complaint-over-fictional-outputs/ https://www.artificialintelligence-news.com/2024/04/29/openai-faces-complaint-over-fictional-outputs/#respond Mon, 29 Apr 2024 08:45:02 +0000 https://www.artificialintelligence-news.com/?p=14751 European data protection advocacy group noyb has filed a complaint against OpenAI over the company’s inability to correct inaccurate information generated by ChatGPT. The group alleges that OpenAI’s failure to ensure the accuracy of personal data processed by the service violates the General Data Protection Regulation (GDPR) in the European Union. “Making up false information... Read more »

The post OpenAI faces complaint over fictional outputs appeared first on AI News.

]]>
European data protection advocacy group noyb has filed a complaint against OpenAI over the company’s inability to correct inaccurate information generated by ChatGPT. The group alleges that OpenAI’s failure to ensure the accuracy of personal data processed by the service violates the General Data Protection Regulation (GDPR) in the European Union.

“Making up false information is quite problematic in itself. But when it comes to false information about individuals, there can be serious consequences,” said Maartje de Graaf, Data Protection Lawyer at noyb. 

“It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.”

The GDPR requires that personal data be accurate, and individuals have the right to rectification if data is inaccurate, as well as the right to access information about the data processed and its sources. However, OpenAI has openly admitted that it cannot correct incorrect information generated by ChatGPT or disclose the sources of the data used to train the model.

“Factual accuracy in large language models remains an area of active research,” OpenAI has argued.

The advocacy group highlights a New York Times report that found chatbots like ChatGPT “invent information at least 3 percent of the time – and as high as 27 percent.” In the complaint against OpenAI, noyb cites an example where ChatGPT repeatedly provided an incorrect date of birth for the complainant, a public figure, despite requests for rectification.

“Despite the fact that the complainant’s date of birth provided by ChatGPT is incorrect, OpenAI refused his request to rectify or erase the data, arguing that it wasn’t possible to correct data,” noyb stated.

OpenAI claimed it could filter or block data on certain prompts, such as the complainant’s name, but not without preventing ChatGPT from filtering all information about the individual. The company also failed to adequately respond to the complainant’s access request, which the GDPR requires companies to fulfil.

“The obligation to comply with access requests applies to all companies. It is clearly possible to keep records of training data that was used to at least have an idea about the sources of information,” said de Graaf. “It seems that with each ‘innovation,’ another group of companies thinks that its products don’t have to comply with the law.”

European privacy watchdogs have already scrutinised ChatGPT’s inaccuracies, with the Italian Data Protection Authority imposing a temporary restriction on OpenAI’s data processing in March 2023 and the European Data Protection Board establishing a task force on ChatGPT.

In its complaint, noyb is asking the Austrian Data Protection Authority to investigate OpenAI’s data processing and measures to ensure the accuracy of personal data processed by its large language models. The advocacy group also requests that the authority order OpenAI to comply with the complainant’s access request, bring its processing in line with the GDPR, and impose a fine to ensure future compliance.

You can read the full complaint here (PDF)

(Photo by Eleonora Francesca Grotto)

See also: Igor Jablokov, Pryon: Building a responsible AI future

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post OpenAI faces complaint over fictional outputs appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2024/04/29/openai-faces-complaint-over-fictional-outputs/feed/ 0
Igor Jablokov, Pryon: Building a responsible AI future https://www.artificialintelligence-news.com/2024/04/25/igor-jablokov-pryon-building-responsible-ai-future/ https://www.artificialintelligence-news.com/2024/04/25/igor-jablokov-pryon-building-responsible-ai-future/#respond Thu, 25 Apr 2024 14:13:22 +0000 https://www.artificialintelligence-news.com/?p=14743 As artificial intelligence continues to rapidly advance, ethical concerns around the development and deployment of these world-changing innovations are coming into sharper focus. In an interview ahead of the AI & Big Data Expo North America, Igor Jablokov, CEO and founder of AI company Pryon, addressed these pressing issues head-on. Critical ethical challenges in AI... Read more »

The post Igor Jablokov, Pryon: Building a responsible AI future appeared first on AI News.

]]>
As artificial intelligence continues to rapidly advance, ethical concerns around the development and deployment of these world-changing innovations are coming into sharper focus.

In an interview ahead of the AI & Big Data Expo North America, Igor Jablokov, CEO and founder of AI company Pryon, addressed these pressing issues head-on.

Critical ethical challenges in AI

“There’s not one, maybe there’s almost 20 plus of them,” Jablokov stated when asked about the most critical ethical challenges. He outlined a litany of potential pitfalls that must be carefully navigated—from AI hallucinations and emissions of falsehoods, to data privacy violations and intellectual property leaks from training on proprietary information.

Bias and adversarial content seeping into training data is another major worry, according to Jablokov. Security vulnerabilities like embedded agents and prompt injection attacks also rank highly on his list of concerns, as well as the extreme energy consumption and climate impact of large language models.

Pryon’s origins can be traced back to the earliest stirrings of modern AI over two decades ago. Jablokov previously led an advanced AI team at IBM where they designed a primitive version of what would later become Watson. “They didn’t greenlight it. And so, in my frustration, I departed, stood up our last company,” he recounted. That company, also called Pryon at the time, went on to become Amazon’s first AI-related acquisition, birthing what’s now Alexa.

The current incarnation of Pryon has aimed to confront AI’s ethical quandaries through responsible design focused on critical infrastructure and high-stakes use cases. “[We wanted to] create something purposely hardened for more critical infrastructure, essential workers, and more serious pursuits,” Jablokov explained.

A key element is offering enterprises flexibility and control over their data environments. “We give them choices in terms of how they’re consuming their platforms…from multi-tenant public cloud, to private cloud, to on-premises,” Jablokov said. This allows organisations to ring-fence highly sensitive data behind their own firewalls when needed.

Pryon also emphasises explainable AI and verifiable attribution of knowledge sources. “When our platform reveals an answer, you can tap it, and it always goes to the underlying page and highlights exactly where it learned a piece of information from,” Jablokov described. This allows human validation of the knowledge provenance.

In some realms like energy, manufacturing, and healthcare, Pryon has implemented human-in-the-loop oversight before AI-generated guidance goes to frontline workers. Jablokov pointed to one example where “supervisors can double-check the outcomes and essentially give it a badge of approval” before information reaches technicians.

Ensuring responsible AI development

Jablokov strongly advocates for new regulatory frameworks to ensure responsible AI development and deployment. While welcoming the White House’s recent executive order as a start, he expressed concerns about risks around generative AI like hallucinations, static training data, data leakage vulnerabilities, lack of access controls, copyright issues, and more.  

Pryon has been actively involved in these regulatory discussions. “We’re back-channelling to a mess of government agencies,” Jablokov said. “We’re taking an active hand in terms of contributing our perspectives on the regulatory environment as it rolls out…We’re showing up by expressing some of the risks associated with generative AI usage.”

On the potential for an uncontrolled, existential “AI risk” – as has been warned about by some AI leaders – Jablokov struck a relatively sanguine tone about Pryon’s governed approach: “We’ve always worked towards verifiable attribution…extracting out of enterprises’ own content so that they understand where the solutions are coming from, and then they decide whether they make a decision with it or not.”

The CEO firmly distanced Pryon’s mission from the emerging crop of open-ended conversational AI assistants, some of which have raised controversy around hallucinations and lacking ethical constraints.

“We’re not a clown college. Our stuff is designed to go into some of the more serious environments on planet Earth,” Jablokov stated bluntly. “I think none of you would feel comfortable ending up in an emergency room and having the medical practitioners there typing in queries into a ChatGPT, a Bing, a Bard…”

He emphasised the importance of subject matter expertise and emotional intelligence when it comes to high-stakes, real-world decision-making. “You want somebody that has hopefully many years of experience treating things similar to the ailment that you’re currently undergoing. And guess what? You like the fact that there is an emotional quality that they care about getting you better as well.”

At the upcoming AI & Big Data Expo, Pryon will unveil new enterprise use cases showcasing its platform across industries like energy, semiconductors, pharmaceuticals, and government. Jablokov teased that they will also reveal “different ways to consume the Pryon platform” beyond the end-to-end enterprise offering, including potentially lower-level access for developers.

As AI’s domain rapidly expands from narrow applications to more general capabilities, addressing the ethical risks will become only more critical. Pryon’s sustained focus on governance, verifiable knowledge sources, human oversight, and collaboration with regulators could offer a template for more responsible AI development across industries.

You can watch our full interview with Igor Jablokov below:

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Igor Jablokov, Pryon: Building a responsible AI future appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2024/04/25/igor-jablokov-pryon-building-responsible-ai-future/feed/ 0
UK and South Korea to co-host AI Seoul Summit https://www.artificialintelligence-news.com/2024/04/12/uk-and-south-korea-cohost-ai-seoul-summit/ https://www.artificialintelligence-news.com/2024/04/12/uk-and-south-korea-cohost-ai-seoul-summit/#respond Fri, 12 Apr 2024 12:03:50 +0000 https://www.artificialintelligence-news.com/?p=14678 The UK and South Korea are set to co-host the AI Seoul Summit on the 21st and 22nd of May. This summit aims to pave the way for the safe development of AI technologies, drawing on the cooperative framework laid down by the Bletchley Declaration. The two-day event will feature a virtual leaders’ session, co-chaired... Read more »

The post UK and South Korea to co-host AI Seoul Summit appeared first on AI News.

]]>
The UK and South Korea are set to co-host the AI Seoul Summit on the 21st and 22nd of May. This summit aims to pave the way for the safe development of AI technologies, drawing on the cooperative framework laid down by the Bletchley Declaration.

The two-day event will feature a virtual leaders’ session, co-chaired by British Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol, and a subsequent in-person meeting among Digital Ministers. UK Technology Secretary Michelle Donelan, and Korean Minister of Science and ICT Lee Jong-Ho will co-host the latter.

This summit builds upon the historic Bletchley Park discussions held at the historic location in the UK last year, emphasising AI safety, inclusion, and innovation. It aims to ensure that AI advancements benefit humanity while minimising potential risks and enhancing global governance on tech innovation.

“The summit we held at Bletchley Park was a generational moment,” stated Donelan. “If we continue to bring international governments and a broad range of voices together, I have every confidence that we can continue to develop a global approach which will allow us to realise the transformative potential of this generation-defining technology safely and responsibly.”

Echoing this sentiment, Minister Lee Jong-Ho highlighted the importance of the upcoming Seoul Summit in furthering global cooperation on AI safety and innovation.

“AI is advancing at an unprecedented pace that exceeds our expectations, and it is crucial to establish global norms and governance to harness such technological innovations to enhance the welfare of humanity,” explained Lee. “We hope that the AI Seoul Summit will serve as an opportunity to strengthen global cooperation on not only AI safety but also AI innovation and inclusion, and promote sustainable AI development.”

Innovation remains a focal point for the UK, evidenced by initiatives like the Manchester Prize and the formation of the AI Safety Institute: the first state-backed organisation dedicated to AI safety. This proactive approach mirrors the UK’s commitment to international collaboration on AI governance, underscored by a recent agreement with the US on AI safety measures.

Accompanying the Seoul Summit will be the release of the International Scientific Report on Advanced AI Safety. This report, independently led by Turing Prize winner Yoshua Bengio, represents a collective effort to consolidate the best scientific research on AI safety. It underscores the summit’s role not only as a forum for discussion but as a catalyst for actionable insight into AI’s safe development.

The agenda of the AI Seoul Summit reflects the urgency of addressing the challenges and opportunities presented by AI. From discussing model safety evaluations, to fostering sustainable AI development. As the world embraces AI innovation, the AI Seoul Summit embodies a concerted effort to shape a future where technology serves humanity safely and delivers prosperity and inclusivity for all.

See also: US and Japan announce sweeping AI and tech collaboration

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post UK and South Korea to co-host AI Seoul Summit appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2024/04/12/uk-and-south-korea-cohost-ai-seoul-summit/feed/ 0
US and Japan announce sweeping AI and tech collaboration https://www.artificialintelligence-news.com/2024/04/11/us-and-japan-sweeping-ai-tech-collaboration/ https://www.artificialintelligence-news.com/2024/04/11/us-and-japan-sweeping-ai-tech-collaboration/#respond Thu, 11 Apr 2024 10:38:04 +0000 https://www.artificialintelligence-news.com/?p=14674 The US and Japan have unveiled a raft of new AI, quantum computing, semiconductors, and other critical technology initiatives. The ambitious plans were announced this week by President Biden and Japanese Prime Minister Kishida Fumio following Kishida’s Official Visit to the White House. While the leaders affirmed their commitment across a broad range of areas... Read more »

The post US and Japan announce sweeping AI and tech collaboration appeared first on AI News.

]]>
The US and Japan have unveiled a raft of new AI, quantum computing, semiconductors, and other critical technology initiatives.

The ambitious plans were announced this week by President Biden and Japanese Prime Minister Kishida Fumio following Kishida’s Official Visit to the White House.

While the leaders affirmed their commitment across a broad range of areas including defence, climate, development, and humanitarian efforts, the new technology collaborations took centre stage and underscore how the US-Japan alliance is evolving into a comprehensive global partnership underpinned by innovation.

AI takes centre stage

One of the headline initiatives is a $110 million partnership between the University of Washington, University of Tsukuba, Carnegie Mellon University, and Keio University. Backed by tech giants like NVIDIA, Arm, Amazon, and Microsoft—as well as Japanese companies—the program aims to solidify US-Japan leadership in cutting-edge AI research and development.

The US and Japan also committed to supporting each other in establishing national AI Safety Institutes and pledged future collaboration on interoperable AI safety standards, evaluations, and risk management frameworks.

In a bid to mitigate AI risks, the countries vowed to provide transparency around AI-generated and manipulated content from official government channels. Technical research and standards efforts were promised to identify and authenticate synthetic media.

Quantum leaps

Quantum technology featured prominently, with the US National Institute of Standards and Technology (NIST) partnering with Japan’s National Institute of Advanced Industrial Science and Technology (AIST) to build robust quantum supply chains.

Trilateral cooperation between the University of Chicago, University of Tokyo, and Seoul National University was also announced to train a quantum workforce and bolster competitiveness.  

The US and Japan additionally welcomed new commercial deals including Quantinuum providing Japan’s RIKEN institute with $50 million in quantum computing services over five years.

Several semiconductor initiatives were unveiled such as potential cooperation between Japan’s Leading-edge Semiconductor Technology Center (LSTC) with the US National Semiconductor Technology Center and National Advanced Packaging Manufacturing Program. The countries pledged to explore joint semiconductor workforce development initiatives through technical workshops.

Other announced commercial deals spanned cloud computing, telecommunications, batteries, robotics, biotechnology, finance, transportation and beyond—highlighting how the alliance is fusing public and private efforts.

Developing humans

Initiatives around STEM education exchanges, technology curriculums, entrepreneur programs, and talent circulation efforts emphasised the focus on developing human capital to power the coming wave of digital innovation.

While the technological breakthroughs grab attention, the proliferation of initiatives aimed at training, exchanging, and nurturing the innovators, researchers, and professionals across these domains could prove just as vital. The US and Japan appear determined to strategically develop and leverage human resources in lockstep with their efforts to establish cutting-edge AI, quantum, chip, and other advanced tech capabilities.

Both nations clearly recognise that building complementary ecosystems across vital technologies is essential to bolstering competitiveness, economic prosperity, and national security in an era of intensifying strategic competition.

(Photo by Tong Su)

See also: Microsoft AI opens London hub to access ‘enormous pool’ of talent

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post US and Japan announce sweeping AI and tech collaboration appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2024/04/11/us-and-japan-sweeping-ai-tech-collaboration/feed/ 0
UK and US sign pact to develop AI safety tests https://www.artificialintelligence-news.com/2024/04/02/uk-and-us-sign-pact-develop-ai-safety-tests/ https://www.artificialintelligence-news.com/2024/04/02/uk-and-us-sign-pact-develop-ai-safety-tests/#respond Tue, 02 Apr 2024 10:17:09 +0000 https://www.artificialintelligence-news.com/?p=14628 The UK and US have signed a landmark agreement to collaborate on developing rigorous testing for advanced AI systems, representing a major step forward in ensuring their safe deployments. The Memorandum of Understanding – signed Monday by UK Technology Secretary Michelle Donelan and US Commerce Secretary Gina Raimondo – establishes a partnership to align the... Read more »

The post UK and US sign pact to develop AI safety tests appeared first on AI News.

]]>
The UK and US have signed a landmark agreement to collaborate on developing rigorous testing for advanced AI systems, representing a major step forward in ensuring their safe deployments.

The Memorandum of Understanding – signed Monday by UK Technology Secretary Michelle Donelan and US Commerce Secretary Gina Raimondo – establishes a partnership to align the scientific approaches of both countries in rapidly iterating robust evaluation methods for cutting-edge AI models, systems, and agents.

Under the deal, the UK’s new AI Safety Institute and the upcoming US organisation will exchange research expertise with the aim of mitigating AI risks, including how to independently evaluate private AI models from companies such as OpenAI. The partnership is modelled on the security collaboration between GCHQ and the National Security Agency.

“This agreement represents a landmark moment, as the UK and the United States deepen our enduring special relationship to address the defining technology challenge of our generation,” stated Donelan. “Only by working together can we address the technology’s risks head on and harness its enormous potential to help us all live easier and healthier lives.”

The partnership follows through on commitments made at the AI Safety Summit hosted in the UK last November. The institutes plan to build a common approach to AI safety testing and share capabilities to tackle risks effectively. They intend to conduct at least one joint public testing exercise on an openly accessible AI model and explore personnel exchanges.

Raimondo emphasised the significance of the collaboration, stating: “AI is the defining technology of our generation. This partnership is going to accelerate both of our Institutes’ work across the full spectrum of risks, whether to our national security or to our broader society.”

Both governments recognise AI’s rapid development and the urgent need for a shared global approach to safety that can keep pace with emerging risks. The partnership takes effect immediately, allowing seamless cooperation between the organisations.

“By working together, we are furthering the long-lasting special relationship between the US and UK and laying the groundwork to ensure that we’re keeping AI safe both now and in the future,” added Raimondo.

In addition to joint testing and capability sharing, the UK and US will exchange vital information about AI model capabilities, risks, and fundamental technical research. This aims to underpin a common scientific foundation for AI safety testing that can be adopted by researchers worldwide.

Despite the focus on risk, Donelan insisted the UK has no plans to regulate AI more broadly in the short term. In contrast, President Joe Biden has taken a stricter position on AI models that threaten national security, and the EU AI Act has adopted tougher regulations.

Industry experts welcomed the collaboration as essential for promoting trust and safety in AI development and adoption across sectors like marketing, finance, and customer service.

“Ensuring AI’s development and use are governed by trust and safety is paramount,” said Ramprakash Ramamoorthy of Zoho. “Taking safeguards to protect training data mitigates risks and bolsters confidence among those deploying AI solutions.”

Dr Henry Balani of Encompass added: “Mitigating the risks of AI, through this collaboration agreement with the US, is a key step towards mitigating risks of financial crime, fostering collaboration, and supporting innovation in a crucial, advancing area of technology.”

(Photo by Art Lasovsky)

See also: IPPR: 8M UK careers at risk of ‘job apocalypse’ from AI

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post UK and US sign pact to develop AI safety tests appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2024/04/02/uk-and-us-sign-pact-develop-ai-safety-tests/feed/ 0
IPPR: 8M UK careers at risk of ‘job apocalypse’ from AI https://www.artificialintelligence-news.com/2024/03/27/ippr-8m-uk-careers-at-risk-job-apocalypse-from-ai/ https://www.artificialintelligence-news.com/2024/03/27/ippr-8m-uk-careers-at-risk-job-apocalypse-from-ai/#respond Wed, 27 Mar 2024 10:37:59 +0000 https://www.artificialintelligence-news.com/?p=14619 A report by the Institute for Public Policy Research (IPPR) sheds light on the potential impact of AI on the UK job market. The study warns of an imminent ‘job apocalypse’, threatening to engulf over eight million careers across the nation, unless swift government intervention is enacted. The report identifies two key stages of generative... Read more »

The post IPPR: 8M UK careers at risk of ‘job apocalypse’ from AI appeared first on AI News.

]]>
A report by the Institute for Public Policy Research (IPPR) sheds light on the potential impact of AI on the UK job market. The study warns of an imminent ‘job apocalypse’, threatening to engulf over eight million careers across the nation, unless swift government intervention is enacted.

The report identifies two key stages of generative AI adoption. The first wave, which is already underway, exposes 11 percent of tasks performed by UK workers. Routine cognitive tasks like database management and organisational tasks like scheduling are most at risk. 

However, in a potential second wave, AI could handle a staggering 59 percent of tasks—impacting higher-earning jobs and non-routine cognitive work like creating databases.

Bhargav Srinivasa Desikan, Senior Research Fellow at IPPR, said: “We could see jobs such as copywriters, graphic designers, and personal assistants roles being heavily affected by AI. The question is how we can steer technological change in a way that allows for novel job opportunities, increased productivity, and economic benefits for all.”

“We are at a sliding doors moment, and policy makers urgently need to develop a strategy to make sure our labour market adapts to the 21st century, without leaving millions behind. It is crucial that all workers benefit from these technological advancements, and not just the big tech corporations.”

IPPR modelled three scenarios for the second wave’s impact:

  • Worst case: 7.9 million jobs lost with no GDP gains
  • Central case: 4.4 million jobs lost but 6.3 percent GDP growth (£144bn/year) 
  • Best case: No jobs lost and 13 percent GDP boost (£306bn/year) from augmenting at-risk jobs

IPPR warns the worst-case displacement is possible without government intervention, urging a “job-centric” AI strategy with fiscal incentives, regulation ensuring human oversight, and support for green jobs less exposed to automation.

The analysis underscores the disproportionate impact on certain demographics, with women and young people bearing the brunt of job displacement. Entry-level positions, predominantly occupied by these groups, face the gravest jeopardy as AI encroaches on roles such as secretarial and customer service positions.

Carsten Jung, Senior Economist at IPPR, said: “History shows that technological transition can be a boon if well managed, or can end in disruption if left to unfold without controls. Indeed, some occupations could be hard hit by generative AI, starting with back office jobs.

“But technology isn’t destiny and a jobs apocalypse is not inevitable – government, employers, and unions have the opportunity to make crucial design decisions now that ensure we manage this new technology well. If they don’t act soon, it may be too late.”

A full copy of the report can be found here (PDF)

(Photo by Cullan Smith)

See also: Stanhope raises £2.3m for AI that teaches machines to ‘make human-like decisions’

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post IPPR: 8M UK careers at risk of ‘job apocalypse’ from AI appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2024/03/27/ippr-8m-uk-careers-at-risk-job-apocalypse-from-ai/feed/ 0
UN passes first global AI resolution https://www.artificialintelligence-news.com/2024/03/22/un-passes-first-global-ai-resolution/ https://www.artificialintelligence-news.com/2024/03/22/un-passes-first-global-ai-resolution/#respond Fri, 22 Mar 2024 16:52:30 +0000 https://www.artificialintelligence-news.com/?p=14598 The UN General Assembly has adopted a landmark resolution on AI, aiming to promote the safe and ethical development of AI technologies worldwide. The resolution, co-sponsored by over 120 countries, was adopted unanimously by all 193 UN member states on 21 March. This marks the first time the UN has established global standards and guidelines... Read more »

The post UN passes first global AI resolution appeared first on AI News.

]]>
The UN General Assembly has adopted a landmark resolution on AI, aiming to promote the safe and ethical development of AI technologies worldwide.

The resolution, co-sponsored by over 120 countries, was adopted unanimously by all 193 UN member states on 21 March. This marks the first time the UN has established global standards and guidelines for AI.

The eight-page resolution calls for the development of “safe, secure, and trustworthy” AI systems that respect human rights and fundamental freedoms. It urges member states and stakeholders to refrain from deploying AI inconsistent with international human rights laws.

Key aspects of the resolution include:

  • Raising public awareness about AI’s benefits and risks
  • Strengthening investments and capabilities in AI research and development  
  • Safeguarding privacy and ensuring transparency in AI systems
  • Addressing diversity and bias issues in AI datasets and algorithms

The resolution also encourages governments to develop national policies, safeguards, and standards for ethical AI development and use. It calls on UN agencies to provide technical assistance to countries in need.

“The resolution adopted today lays out a comprehensive vision for how countries should respond to the opportunities and challenges of AI,” said Jake Sullivan, US National Security Advisor.

“It lays out a path for international cooperation on AI, including to promote equitable access, take steps to manage the risks of AI, protect privacy, guard against misuse, prevent exacerbated bias and discrimination.”

Growing international efforts to regulate AI  

The UN resolution follows several international efforts to regulate the rapidly growing AI industry over ethics and security concerns.

The European Union recently approved the AI Act to set risk-based rules for AI across the 27-nation bloc. Investigations into potential antitrust issues around AI have also been launched against major tech companies.

In the US, President Biden signed an executive order last year initiating a national AI strategy with a focus on safety and security.

As AI capabilities advance, the UN resolution signals a global commitment to ensure the technology’s development aligns with ethical principles and benefits humanity as a whole.

“Developed in consultation with civil society and private sector experts, the resolution squarely addresses the priorities of many developing countries, such as encouraging AI capacity building and harnessing the technology to advance sustainable development,” explained Sullivan.

“Critically, the resolution makes clear that protecting human rights and fundamental freedoms must be central to the development and use of AI systems.”

The full text of the UN resolution can be found here.

(Photo by Ilyass SEDDOUG)

See also: NVIDIA unveils Blackwell architecture to power next GenAI wave 

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post UN passes first global AI resolution appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2024/03/22/un-passes-first-global-ai-resolution/feed/ 0
UAE set to help fund OpenAI’s in-house chips https://www.artificialintelligence-news.com/2024/03/15/uae-set-help-fund-openai-in-house-chips/ https://www.artificialintelligence-news.com/2024/03/15/uae-set-help-fund-openai-in-house-chips/#respond Fri, 15 Mar 2024 16:21:50 +0000 https://www.artificialintelligence-news.com/?p=14550 OpenAI’s ambitious plans to develop its own semiconductor chips for powering advanced AI models could receive a boost from the United Arab Emirates (UAE), according to a report by the Financial Times. The report states that MGX — a state-backed group in Abu Dhabi — is in discussions to support OpenAI’s venture to build AI... Read more »

The post UAE set to help fund OpenAI’s in-house chips appeared first on AI News.

]]>
OpenAI’s ambitious plans to develop its own semiconductor chips for powering advanced AI models could receive a boost from the United Arab Emirates (UAE), according to a report by the Financial Times.

The report states that MGX — a state-backed group in Abu Dhabi — is in discussions to support OpenAI’s venture to build AI chips in-house. This information comes from two individuals with knowledge of the discussions.

In order to achieve its goal of creating semiconductor chips internally, OpenAI is reportedly seeking investments worth trillions of dollars from investors worldwide. By manufacturing its own chips, the San Francisco-based company aims to reduce its reliance on Nvidia, the current global leader in semiconductor chip technology.

As part of its funding efforts, OpenAI struck a deal with Thrive Capital in February 2023, which reportedly increased the company’s valuation to more than $80 billion, marking an almost threefold increase in under 10 months.

This comes as the UK semiconductor sector gains enhanced access to research funding through the country’s participation in the EU’s ‘Chips Joint Undertaking’.

The UK’s participation in the Chips Joint Undertaking provides the British semiconductor sector with enhanced access to a €1.3 billion pot of funds set aside from Horizon Europe to support research in semiconductor technologies up to 2027. The move is backed by an initial £5 million from the UK government this year, with an additional £30 million due to support UK participation in further research between 2025 and 2027.

“Our membership of the Chips Joint Undertaking will boost Britain’s strengths in semiconductor science and research to secure our position in the global chip supply chain,” said Technology Minister Saqib Bhatti. “This underscores our unwavering commitment to pushing the boundaries of technology and cements our important role in shaping the future of semiconductor technologies around the world.”

Back in the UAE, MGX — the group behind the potential investment in OpenAI — is an AI-focused fund launched earlier this week and headed by the UAE’s national security adviser, Sheikh Tahnoon Bin Zayed al-Nahyan. The fund was established in collaboration with G42 and Mubadala, with G42 having already entered into a partnership with OpenAI in October 2023 as part of the company’s Middle East expansion.

During the G42 partnership deal, OpenAI CEO Sam Altman stated that they plan to bring AI solutions to the Middle East that “resonate with the nuances of the region.”

One of the sources briefed on the MGX fund emphasised, “They’re looking at creating a structure that will put Abu Dhabi at the centre of this AI strategy with global partners around the world.”

As the race to develop cutting-edge semiconductor technologies intensifies, both the UAE and the UK-EU are positioning themselves as key players.

(Photo by Wael Hneini on Unsplash)

See also: EU approves controversial AI Act to mixed reactions

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post UAE set to help fund OpenAI’s in-house chips appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2024/03/15/uae-set-help-fund-openai-in-house-chips/feed/ 0