regulation Archives - AI News

EU launches office to implement AI Act and foster innovation (30 May 2024)

The European Union has launched a new office dedicated to overseeing the implementation of its landmark AI Act, which is regarded as one of the most comprehensive AI regulations in the world. This new initiative adopts a risk-based approach, imposing stringent regulations on higher-risk AI applications to ensure their safe and ethical deployment.

The primary goal of this office is to promote the “future development, deployment and use” of AI technologies, aiming to harness their societal and economic benefits while mitigating associated risks. By focusing on innovation and safety, the office seeks to position the EU as a global leader in AI regulation and development.

According to Margrethe Vestager, the EU competition chief, the new office will play a “key role” in implementing the AI Act, particularly with regard to general-purpose AI models. She stated, “Together with developers and a scientific community, the office will evaluate and test general-purpose AI to ensure that AI serves us as humans and upholds our European values.”

Sridhar Iyengar, Managing Director for Zoho Europe, welcomed the establishment of the AI office, noting, “The establishment of the AI office in the European Commission to play a key role with the implementation of the EU AI Act is a welcome sign of progress, and it is encouraging to see the EU positioning itself as a global leader in AI regulation. We hope to continue to see collaboration between governments, businesses, academics and industry experts to guide on safe use of AI to boost business growth.”

Iyengar highlighted the dual nature of AI’s impact on businesses, pointing out both its benefits and concerns. He emphasised the importance of adhering to best practice guidance and legislative guardrails to ensure safe and ethical AI adoption.

“AI can drive innovation in business tools, helping to improve fraud detection, forecasting, and customer data analysis to name a few. These benefits not only have the potential to elevate customer experience but can increase efficiency, present insights, and suggest actions to drive further success,” Iyengar said.

The office will be staffed by more than 140 individuals, including technology specialists, administrative assistants, lawyers, policy specialists, and economists. It will consist of various units focusing on regulation and compliance, as well as safety and innovation, reflecting the multifaceted approach needed to govern AI effectively.

Rachael Hays, Transformation Director for Definia, part of The IN Group, commented: “The establishment of a dedicated AI Office within the European Commission underscores the EU’s commitment to both innovation and regulation which is undoubtedly crucial in this rapidly evolving AI landscape.”

Hays also pointed out the potential for workforce upskilling that this initiative provides. She referenced findings from their Tech and the Boardroom research, which revealed that over half of boardroom leaders view AI as the biggest direct threat to their organisations.

“This initiative directly addresses these fears as employees across various sectors are given the opportunity to adapt and thrive in an AI-driven world. The AI Office offers promising hope and guidance in developing economic benefits while mitigating risks associated with AI technology, something we should all get on board with,” she added.

As the EU takes these steps towards comprehensive AI governance, the office’s work will be pivotal in driving forward both innovation and safety in the field.

(Photo by Sara Kurfeß)

See also: Elon Musk’s xAI secures $6B to challenge OpenAI in AI race


Igor Jablokov, Pryon: Building a responsible AI future (25 April 2024)

As artificial intelligence continues to rapidly advance, ethical concerns around the development and deployment of these world-changing innovations are coming into sharper focus.

In an interview ahead of the AI & Big Data Expo North America, Igor Jablokov, CEO and founder of AI company Pryon, addressed these pressing issues head-on.

Critical ethical challenges in AI

“There’s not one, maybe there’s almost 20 plus of them,” Jablokov stated when asked about the most critical ethical challenges. He outlined a litany of potential pitfalls that must be carefully navigated, from AI hallucinations and the generation of falsehoods to data privacy violations and intellectual property leaks from training on proprietary information.

Bias and adversarial content seeping into training data is another major worry, according to Jablokov. Security vulnerabilities like embedded agents and prompt injection attacks also rank highly on his list of concerns, as well as the extreme energy consumption and climate impact of large language models.

Pryon’s origins can be traced back to the earliest stirrings of modern AI over two decades ago. Jablokov previously led an advanced AI team at IBM where they designed a primitive version of what would later become Watson. “They didn’t greenlight it. And so, in my frustration, I departed, stood up our last company,” he recounted. That company, also called Pryon at the time, went on to become Amazon’s first AI-related acquisition, birthing what’s now Alexa.

The current incarnation of Pryon has aimed to confront AI’s ethical quandaries through responsible design focused on critical infrastructure and high-stakes use cases. “[We wanted to] create something purposely hardened for more critical infrastructure, essential workers, and more serious pursuits,” Jablokov explained.

A key element is offering enterprises flexibility and control over their data environments. “We give them choices in terms of how they’re consuming their platforms…from multi-tenant public cloud, to private cloud, to on-premises,” Jablokov said. This allows organisations to ring-fence highly sensitive data behind their own firewalls when needed.

Pryon also emphasises explainable AI and verifiable attribution of knowledge sources. “When our platform reveals an answer, you can tap it, and it always goes to the underlying page and highlights exactly where it learned a piece of information from,” Jablokov described. This allows human validation of the knowledge provenance.
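To make the idea concrete, here is a minimal sketch of an answer object that carries its own provenance, so a reviewer can jump to the cited document and excerpt. It assumes a generic retrieval-style pipeline; the class and function names are illustrative and are not Pryon's actual API.

```python
from dataclasses import dataclass

@dataclass
class SourcePassage:
    document: str   # file the answer was drawn from
    page: int       # page to open when the user taps the citation
    excerpt: str    # exact text span to highlight

@dataclass
class AttributedAnswer:
    text: str
    sources: list[SourcePassage]

def answer_with_attribution(question: str, passages: list[SourcePassage]) -> AttributedAnswer:
    """Compose an answer that always carries its supporting passages.

    A real system would retrieve passages from an indexed corpus and have a
    model draft the answer; here both steps are stubbed so only the
    provenance-tracking shape is on display.
    """
    best = passages[0]  # stand-in for a relevance-ranked retrieval step
    return AttributedAnswer(
        text=f"According to {best.document} (p. {best.page}): {best.excerpt}",
        sources=[best],
    )

answer = answer_with_attribution(
    "What is the lockout procedure?",
    [SourcePassage("maintenance_manual.pdf", 42, "Isolate power before servicing the unit.")],
)
for s in answer.sources:
    print(f"Source: {s.document}, p. {s.page} -> {s.excerpt}")
```

In a production system the retrieval step would draw on an index over the organisation's own content, which is what allows an answer to be traced back to a specific page in the way Jablokov describes.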

In some realms like energy, manufacturing, and healthcare, Pryon has implemented human-in-the-loop oversight before AI-generated guidance goes to frontline workers. Jablokov pointed to one example where “supervisors can double-check the outcomes and essentially give it a badge of approval” before information reaches technicians.
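The approval step Jablokov describes can be pictured as a simple review queue: AI-generated guidance is held until a named supervisor releases it. This is an illustrative sketch of the general pattern, not Pryon's implementation, and every identifier in it is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Guidance:
    technician: str
    text: str
    approved: bool = False
    approved_by: str = ""

class ReviewQueue:
    """Holds AI-generated guidance until a human supervisor releases it."""

    def __init__(self) -> None:
        self._pending: list[Guidance] = []

    def submit(self, item: Guidance) -> None:
        # Nothing submitted here is visible to frontline workers yet.
        self._pending.append(item)

    def approve(self, index: int, supervisor: str) -> Guidance:
        # Only an explicit sign-off moves guidance out of the pending queue.
        item = self._pending.pop(index)
        item.approved = True
        item.approved_by = supervisor
        return item

queue = ReviewQueue()
queue.submit(Guidance("tech-042", "Replace valve V-17 before restarting the pump."))
released = queue.approve(0, supervisor="shift-lead-03")
print(f"Sent to {released.technician}, approved by {released.approved_by}")
```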

Ensuring responsible AI development

Jablokov strongly advocates for new regulatory frameworks to ensure responsible AI development and deployment. While welcoming the White House’s recent executive order as a start, he expressed concerns about risks around generative AI like hallucinations, static training data, data leakage vulnerabilities, lack of access controls, copyright issues, and more.  

Pryon has been actively involved in these regulatory discussions. “We’re back-channelling to a mess of government agencies,” Jablokov said. “We’re taking an active hand in terms of contributing our perspectives on the regulatory environment as it rolls out…We’re showing up by expressing some of the risks associated with generative AI usage.”

On the potential for an uncontrolled, existential “AI risk” – as has been warned about by some AI leaders – Jablokov struck a relatively sanguine tone about Pryon’s governed approach: “We’ve always worked towards verifiable attribution…extracting out of enterprises’ own content so that they understand where the solutions are coming from, and then they decide whether they make a decision with it or not.”

The CEO firmly distanced Pryon’s mission from the emerging crop of open-ended conversational AI assistants, some of which have raised controversy around hallucinations and lacking ethical constraints.

“We’re not a clown college. Our stuff is designed to go into some of the more serious environments on planet Earth,” Jablokov stated bluntly. “I think none of you would feel comfortable ending up in an emergency room and having the medical practitioners there typing in queries into a ChatGPT, a Bing, a Bard…”

He emphasised the importance of subject matter expertise and emotional intelligence when it comes to high-stakes, real-world decision-making. “You want somebody that has hopefully many years of experience treating things similar to the ailment that you’re currently undergoing. And guess what? You like the fact that there is an emotional quality that they care about getting you better as well.”

At the upcoming AI & Big Data Expo, Pryon will unveil new enterprise use cases showcasing its platform across industries like energy, semiconductors, pharmaceuticals, and government. Jablokov teased that they will also reveal “different ways to consume the Pryon platform” beyond the end-to-end enterprise offering, including potentially lower-level access for developers.

As AI’s domain rapidly expands from narrow applications to more general capabilities, addressing the ethical risks will become only more critical. Pryon’s sustained focus on governance, verifiable knowledge sources, human oversight, and collaboration with regulators could offer a template for more responsible AI development across industries.

You can watch our full interview with Igor Jablokov below:


UK and South Korea to co-host AI Seoul Summit (12 April 2024)

The UK and South Korea are set to co-host the AI Seoul Summit on the 21st and 22nd of May. This summit aims to pave the way for the safe development of AI technologies, drawing on the cooperative framework laid down by the Bletchley Declaration.

The two-day event will feature a virtual leaders’ session, co-chaired by British Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol, and a subsequent in-person meeting among digital ministers. UK Technology Secretary Michelle Donelan and Korean Minister of Science and ICT Lee Jong-Ho will co-host the latter.

This summit builds upon the discussions held at the historic Bletchley Park in the UK last year, emphasising AI safety, inclusion, and innovation. It aims to ensure that AI advancements benefit humanity while minimising potential risks and enhancing global governance on tech innovation.

“The summit we held at Bletchley Park was a generational moment,” stated Donelan. “If we continue to bring international governments and a broad range of voices together, I have every confidence that we can continue to develop a global approach which will allow us to realise the transformative potential of this generation-defining technology safely and responsibly.”

Echoing this sentiment, Minister Lee Jong-Ho highlighted the importance of the upcoming Seoul Summit in furthering global cooperation on AI safety and innovation.

“AI is advancing at an unprecedented pace that exceeds our expectations, and it is crucial to establish global norms and governance to harness such technological innovations to enhance the welfare of humanity,” explained Lee. “We hope that the AI Seoul Summit will serve as an opportunity to strengthen global cooperation on not only AI safety but also AI innovation and inclusion, and promote sustainable AI development.”

Innovation remains a focal point for the UK, evidenced by initiatives like the Manchester Prize and the formation of the AI Safety Institute: the first state-backed organisation dedicated to AI safety. This proactive approach mirrors the UK’s commitment to international collaboration on AI governance, underscored by a recent agreement with the US on AI safety measures.

Accompanying the Seoul Summit will be the release of the International Scientific Report on Advanced AI Safety. This report, independently led by Turing Award winner Yoshua Bengio, represents a collective effort to consolidate the best scientific research on AI safety. It underscores the summit’s role not only as a forum for discussion but as a catalyst for actionable insight into AI’s safe development.

The agenda of the AI Seoul Summit reflects the urgency of addressing the challenges and opportunities presented by AI, from model safety evaluations to fostering sustainable AI development. As the world embraces AI innovation, the summit embodies a concerted effort to shape a future where technology serves humanity safely and delivers prosperity and inclusivity for all.

See also: US and Japan announce sweeping AI and tech collaboration


UK announces over £100M to support ‘agile’ AI regulation (6 February 2024)

The UK government has announced over £100 million in new funding to support an “agile” approach to AI regulation. This includes £10 million to prepare and upskill regulators to address the risks and opportunities of AI across sectors like telecoms, healthcare, and education. 

The investment comes at a vital time, as research from Thoughtworks shows 91% of British people argue that government regulations must do more to hold businesses accountable for their AI systems. The public wants more transparency, with 82% of consumers favouring businesses that proactively communicate how they are regulating general AI.

In a government response published today to last year’s AI Regulation White Paper consultation, the UK outlined its context-based approach to regulation that empowers existing regulators to address AI risks in a targeted way, while avoiding rushed legislation that could stifle innovation.

However, the government for the first time set out its thinking on potential future binding requirements for developers building advanced AI systems, to ensure accountability for safety – a measure 68% of the public said was needed in AI regulation. 

The response also revealed all key regulators will publish their approach to managing AI risks by 30 April, detailing their expertise and plans for the coming year. This aims to provide confidence to businesses and citizens on transparency. However, 30% still don’t think increased AI regulation is actually for their benefit, indicating scepticism remains.

Additionally, nearly £90 million was announced to launch nine new research hubs across the UK and a US partnership focused on responsible AI development. Separately, £2 million in funding will support projects defining responsible AI across sectors like policing – with 56% of the public wanting improved user education around AI.

Tom Whittaker, Senior Associate at independent UK law firm Burges Salmon, said: “The technology industry will welcome the large financial investment by the UK government to support regulators continuing what many see as an agile and sector-specific approach to AI regulation.

“The UK government is trying to position itself as pro-innovation for AI generally and across multiple sectors.  This is notable at a time when the EU is pushing ahead with its own significant AI legislation that the EU consider will boost trustworthy AI but which some consider a threat to innovation.”

Science Minister Michelle Donelan said the UK’s “innovative approach to AI regulation” has made it a leader in both AI safety and development. She said the agile, sector-specific approach allows the UK to “grip the risks immediately”, paving the way for it to reap AI’s benefits safely.

The wide-ranging funding and initiatives aim to cement the UK as a pioneer in safe AI innovation while assuaging public concerns. This builds on previous commitments like the £100 million AI Safety Institute to evaluate emerging models. 

Greg Hanson, GVP and Head of Sales EMEA North at Informatica, commented: “Undoubtedly, greater AI regulation is coming to the UK. And demand for this is escalating – especially considering half (52%) of UK businesses are already forging ahead with generative AI, above the global average of 45%.

“Yet with the adoption of AI, comes new challenges. Nearly all businesses in the UK who have adopted AI admit to having encountered roadblocks. In fact, 43% say AI governance is the main obstacle, closely followed by AI ethics (42%).”

Overall, the package of measures amounts to over £100 million of new funding towards the UK’s mission to lead on safe and responsible AI progress. This balances safely harnessing AI’s potential economic and societal benefits with a targeted approach to regulating very real risks.

(Photo by Rocco Dipoppa on Unsplash)

See also: Bank of England Governor: AI won’t lead to mass job losses


AI & Big Data Expo: Ethical AI integration and future trends (18 December 2023)

Grace Zheng, Data Analyst at Canon and Founder of Kosh Duo, recently sat down for an interview with AI News during AI & Big Data Expo Global to discuss integrating AI ethically and to share her insights on future trends.

Zheng first explained how more than a decade working in digital marketing and e-commerce sparked her more recent interest in data analytics and artificial intelligence, as machine learning has become hugely popular.

At Canon, Zheng’s team focuses on ethically integrating AI into business by first mapping current and potential AI applications across areas like marketing and e-commerce. They then analyse and assess risks to ensure compliance with regulations.

Canon is actively mapping out AI applications and assessing risks, as Grace explained, “to align with regulations such as the EU legislations.”
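As a rough illustration of that kind of mapping exercise, the sketch below keeps a small register of AI use cases and flags each one against the EU AI Act's broad risk tiers. The use cases, tier assignments, and suggested follow-ups are hypothetical examples of the pattern only, not Canon's actual inventory or legal guidance.

```python
from dataclasses import dataclass

# The EU AI Act groups systems into broad risk tiers; obligations rise with the tier.
RISK_TIERS = ("minimal", "limited", "high", "unacceptable")

@dataclass
class AIUseCase:
    name: str
    business_area: str
    risk_tier: str  # one of RISK_TIERS
    notes: str = ""

def compliance_actions(use_case: AIUseCase) -> str:
    """Very coarse mapping from risk tier to the kind of follow-up a review
    team might schedule; real assessments are far more granular."""
    if use_case.risk_tier not in RISK_TIERS:
        raise ValueError(f"unknown risk tier: {use_case.risk_tier}")
    if use_case.risk_tier == "unacceptable":
        return "do not deploy"
    if use_case.risk_tier == "high":
        return "conformity assessment, documentation, and human oversight"
    if use_case.risk_tier == "limited":
        return "transparency measures, e.g. disclose AI-generated content"
    return "monitor; no specific obligations"

register = [
    AIUseCase("product recommendation engine", "e-commerce", "minimal"),
    AIUseCase("marketing copy generator", "marketing", "limited", "label AI output"),
]

for uc in register:
    print(f"{uc.business_area}: {uc.name} -> {uc.risk_tier} ({compliance_actions(uc)})")
```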

As founder of Kosh Duo, Zheng also provides coaching to help businesses scale up through the use of AI marketing and data-driven approaches. She coaches professionals on achieving greater recognition and rewards by leveraging AI tools as well.

A key challenge she encounters is misunderstandings around what AI truly means – many conflate it solely with chatbots like ChatGPT rather than appreciating the full breadth of machine learning, neural networks, natural language processing, and more that enable today’s AI.

“There’s a lot of misconceptions, definitely. One of the biggest fears, as I touched on, is the very generic understanding that GPT equals AI,” says Zheng. “[Kosh Duo] provides coaching services to businesses to scale to the next level using AI marketing and data-driven approaches.”

When asked about trends to watch, Zheng emphasised the need for continual learning given how rapidly the field evolves. She expects that 2024 will be an “awakening year” where businesses truly grasp AI’s potential and individuals appreciate the need to evaluate their current skillsets.

The interview highlighted the transformative but often misunderstood power of AI in business and the importance of developing specialised skills to properly harness it. Zheng stressed that with the right ethical foundations and coaching, AI and machine learning can become positive forces to drive growth rather than something to fear.

Watch our full interview with Grace Zheng below:

(Photo by Benjamin Davies on Unsplash)


MIT publishes white papers to guide AI governance (11 December 2023)

A committee of MIT leaders and scholars has published a series of white papers aiming to shape the future of AI governance in the US. The comprehensive framework outlined in these papers seeks to extend existing regulatory and liability approaches to effectively oversee AI while fostering its benefits and mitigating potential harm.

Titled “A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector,” the main policy paper proposes leveraging current US government entities to regulate AI tools within their respective domains.

Dan Huttenlocher, dean of the MIT Schwarzman College of Computing, emphasises the pragmatic approach of initially focusing on areas where human activity is already regulated and gradually expanding to address emerging risks associated with AI.

The framework underscores the importance of defining the purpose of AI tools, aligning regulations with specific applications and holding AI providers accountable for the intended use of their technologies.

Asu Ozdaglar, deputy dean of academics in the MIT Schwarzman College of Computing, believes having AI providers articulate the purpose and intent of their tools is crucial for determining liability in case of misuse.

Addressing the complexity of AI systems existing at multiple levels, the brief acknowledges the challenges of governing both general and specific AI tools. The proposal advocates for a self-regulatory organisation (SRO) structure to supplement existing agencies, offering responsive and flexible oversight tailored to the rapidly evolving AI landscape.

Furthermore, the policy papers call for advancements in auditing AI tools—exploring various pathways such as government-initiated, user-driven, or legal liability proceedings.

The consideration of a government-approved SRO – akin to the Financial Industry Regulatory Authority (FINRA) – is proposed to enhance domain-specific knowledge and facilitate practical engagement with the dynamic AI industry.

MIT’s involvement in AI governance stems from its recognised expertise in AI research, positioning the institution as a key contributor to addressing the challenges posed by evolving AI technologies. The release of these white papers signals MIT’s commitment to promoting responsible AI development and usage.

You can find MIT’s series of AI policy briefs here.

(Photo by Aaron Burden on Unsplash)

See also: AI & Big Data Expo: Demystifying AI and seeing past the hype


UK reveals AI Safety Summit opening day agenda (16 October 2023)

The UK Government has unveiled plans for the inaugural global AI Safety Summit, scheduled to take place at the historic Bletchley Park.

The summit will bring together digital ministers, AI companies, civil society representatives, and independent experts for crucial discussions. The primary focus is on frontier AI, the most advanced generation of AI models, which – if not developed responsibly – could pose significant risks.

The event aims to explore both the potential dangers emerging from rapid advances in AI and the transformative opportunities the technology presents, especially in education and international research collaborations.

Technology Secretary Michelle Donelan will lead the summit and articulate the government’s position that safety and security must be central to AI advancements. The event will feature parallel sessions in the first half of the day, delving into understanding frontier AI risks.

Other topics that will be covered during the AI Safety Summit include threats to national security, potential election disruption, erosion of social trust, and exacerbation of global inequalities.

The latter part of the day will focus on roundtable discussions aimed at enhancing frontier AI safety responsibly. Delegates will explore defining risk thresholds, effective safety assessments, and robust governance mechanisms to enable the safe scaling of frontier AI by developers.

International collaboration will be a key theme, emphasising the need for policymakers, scientists, and researchers to work together in managing risks and harnessing AI’s potential for global economic and social benefits.

The summit will conclude with a panel discussion on the transformative opportunities of AI for the public good, specifically in revolutionising education. Donelan will provide closing remarks and underline the importance of global collaboration in adopting AI safely.

This event aims to mark a positive step towards fostering international cooperation in the responsible development and deployment of AI technology. By convening global experts and policymakers, the UK Government wants to lead the conversation on creating a safe and positive future with AI.

(Photo by Ricardo Gomez Angel on Unsplash)

See also: UK races to agree statement on AI risks with global leaders


UK races to agree statement on AI risks with global leaders (10 October 2023)

Downing Street officials find themselves in a race against time to finalise an agreed communique from global leaders concerning the escalating concerns surrounding artificial intelligence. 

This hurried effort comes in anticipation of the UK’s AI Safety Summit scheduled next month at the historic Bletchley Park.

The summit, designed to provide an update on White House-brokered safety guidelines – as well as facilitate a debate on how national security agencies can scrutinise the most dangerous versions of this technology – faces a potential hurdle. It’s unlikely to generate an agreement on establishing a new international organisation to scrutinise cutting-edge AI, apart from its proposed communique.

The proposed AI Safety Institute, a brainchild of the UK government, aims to enable national security-related scrutiny of frontier AI models. However, this ambition might face disappointment if an international consensus is not reached.

Claire Trachet, tech industry expert and CEO of business advisory Trachet, said:

“I think that this marks a very important moment for the UK, especially in terms of recognising that there are other players across Europe also hoping to catch up with the US in the AI space. It’s therefore essential that the UK continues to balance its drive for innovation with creating effective regulation that will not stifle the country’s growth prospects.

“While the UK possesses the potential to be a frontrunner in the global tech race, concerted efforts are needed to strengthen the country’s position. By investing in research, securing supply chains, promoting collaboration, and nurturing local talent, the UK can position itself as a prominent player in shaping the future of AI-driven technologies.”

Currently, the UK stands as a key player in the global tech arena, with its AI market valued at over £16.9 billion and expected to soar to £803.7 billion by 2035, according to the US International Trade Administration.

The British government’s commitment is evident through its £1 billion investment in supercomputing and AI research. Moreover, the introduction of seven new AI principles for regulation – focusing on accountability, access, diversity, choice, flexibility, fair dealing, and transparency – showcases the government’s dedication to fostering a robust AI ecosystem.

Despite these efforts, challenges loom as France emerges as a formidable competitor within Europe.

French billionaire Xavier Niel recently announced a €200 million investment in artificial intelligence, including a research lab and supercomputer, aimed at bolstering Europe’s competitiveness in the global AI race.

Niel’s initiative aligns with President Macron’s commitment; Macron announced €500 million in new funding at VivaTech to create new AI champions. Furthermore, France plans to attract companies through its own AI summit.

Claire Trachet acknowledges the intensifying competition between the UK and France, stating that while the rivalry adds complexity to the UK’s goals, it can also spur innovation within the industry. However, Trachet emphasises the importance of the UK striking a balance between innovation and effective regulation to sustain its growth prospects.

“In my view, if Europe wants to truly make a meaningful impact, they must leverage their collective resources, foster collaboration, and invest in nurturing a robust ecosystem,” adds Trachet.

“This means combining the strengths of the UK, France and Germany, to possibly create a compelling alternative in the next 10-15 years that disrupts the AI landscape, but again, this would require a heavily strategic vision and collaborative approach.”

(Photo by Nick Kane on Unsplash)

See also: Cyber Security & Cloud Expo: The alarming potential of AI-powered cybercrime


UK deputy PM warns UN that AI regulation is falling behind advances (22 September 2023)

In a stark address to the UN, UK Deputy PM Oliver Dowden has sounded the alarm on the potentially destabilising impact of AI on the world order.

Dowden has urged governments to take immediate action to regulate AI development, warning that the rapid pace of advancement in AI technology could outstrip their ability to ensure its safe and responsible use.

Speaking at the UN General Assembly in New York, Dowden highlighted that the UK will host a global summit in November to discuss the regulation of AI. The summit aims to bring together international leaders, experts, and industry representatives to address the pressing concerns surrounding AI.

One of the primary fears surrounding unchecked AI development is the potential for widespread job displacement, the proliferation of misinformation, and the deepening of societal discrimination. Without adequate regulations in place, AI technologies could be harnessed to magnify these negative effects.

“The starting gun has been fired on a globally competitive race in which individual companies as well as countries will strive to push the boundaries as far and fast as possible,” Dowden cautioned during his address.

Dowden went on to note that the current state of global regulation lags behind the rapid advances in AI technology. Unlike the past, where regulations followed technological developments, Dowden stressed that rules must now be established in tandem with AI’s evolution.

Oseloka Obiora, CTO at RiverSafe, said: “Business leaders are jumping into bed with the latest AI trends at an alarming rate, with little or no concern for the consequences.

“With global regulatory standards falling way behind and the most basic cyber security checks being neglected, it is right for the government to call for new global standards to prevent the AI ticking timebomb from exploding.”

Dowden underscored the importance of ensuring that AI companies do not have undue influence over the regulatory process. He emphasised the need for transparency and oversight, stating that AI companies should not “mark their own homework.” Instead, governments and citizens should have confidence that risks associated with AI are being properly mitigated.

Moreover, Dowden highlighted that only coordinated action by nation-states could provide the necessary assurance to the public that significant national security concerns stemming from AI have been adequately addressed.

He also cautioned against oversimplifying the role of AI—noting that it can be both a tool for good and a tool for ill, depending on its application. During the UN General Assembly, the UK also pitched AI’s potential to accelerate development in the world’s most impoverished nations.

The UK’s initiative to host a global AI regulation summit signals a growing recognition among world leaders of the urgent need to establish a robust framework for AI governance. As AI technology continues to advance, governments are under increasing pressure to strike the right balance between innovation and safeguarding against potential risks.

Jake Moore, Global Cybersecurity Expert at ESET, comments: “The fear that AI could shape our lives in a completely new direction is not without substance, as the power behind the technology churning this wheel is potentially destructive. Not only could AI change jobs, it also has the ability to change what we know to be true and impact what we believe.   

“Regulating it would mean potentially stifling innovation. But even attempting to regulate such a powerful beast would be like trying to regulate the dark web, something that is virtually impossible. Large datasets and algorithms can be designed to do almost anything, so we need to start looking at how we can improve educating people, especially young people in schools, into understanding this new wave of risk.”

Dowden’s warning to the United Nations serves as a clarion call for nations to come together and address the challenges posed by AI head-on. The global summit in November will be a critical step in shaping the future of AI governance and ensuring that the world order remains stable in the face of unprecedented technological change.

(Image Credit: UK Government under CC BY 2.0 license)

See also: CMA sets out principles for responsible AI development 


IFOW: AI can have a positive impact on jobs (20 September 2023)

In a world where sensational headlines about AI and autonomous robots dominate the media landscape, a new report sheds light on a different narrative.

The research, funded by the Nuffield Foundation, explores the nuanced impacts of AI adoption on jobs and work quality. Contrary to the doomsday predictions, the report suggests that AI could have a positive influence on employment and job quality.

The study, conducted by the Institute for the Future of Work (IFOW), indicates that AI adoption is already well underway in UK firms. However, rather than leading to widespread job loss, it suggests that AI has the potential to create more jobs and improve the quality of existing ones.

Anna Thomas, Co-Founder and Director of the IFOW, expressed optimism about the study’s results: “This report not only highlights that the adoption of AI is well underway across UK firms but that it is possible for this tech transformation to lead to both net job creation and more ‘good work’ – great news as we look to solve the UK’s productivity puzzle.”

“With the [UK-hosted global] AI Summit fast approaching, Government must act urgently to regulate, legislate and invest so that UK firms and workers can benefit from this fast-moving technology.”

One key takeaway from the study is the importance of regional investment in education and infrastructure to make all areas of the UK ‘innovation ready.’ The study also emphasises the need for firms to engage workers when investing in automation and AI.

Taking these suggested actions could help ensure that the benefits of AI are distributed more evenly across regions and demographics, reducing existing inequalities.

Professor Sir Christopher Pissarides, Nobel Laureate and Co-Founder of IFOW, stressed the significance of placing “good jobs” at the heart of an economic and industrial strategy in the age of automation. He believes that the study provides valuable insights into how this can be achieved.

The IFOW’s study suggests that with the right approach, AI adoption can lead to a positive transformation of the labour market. By investing in education, infrastructure, and worker engagement, the UK can harness the potential of AI to create more jobs and improve job quality across the country.

Matt Robinson, Head of Nations and Regions, techUK, commented: “Realising the benefits of technologies like AI for all will mean getting the right foundations in place across areas like digital infrastructure and skills provision in every part of the UK to enable and create high-quality digital jobs.

“Access to good digital infrastructure, as well as skills and talent, is a priority for techUK members, and the Institute’s work provides welcome insights into their importance for creating good work throughout the country.”

While the IFOW’s study paints a more positive outlook on the adoption of AI than most headlines, it will be an uphill battle to convince the wider public.

A poll of US adults released this week by MITRE-Harris found that a majority (54%) are concerned about the risks of AI, while just 39% said they believed today’s AI technologies are safe and secure, down nine points from the previous survey.

As the AI industry continues to evolve, urgent action from governments, employers, and employees is essential to realise the opportunities, manage the risks, and convince a wary public of the technology’s benefits.

A copy of the full working paper can be found here (PDF).

(Photo by Damian Zaleski on Unsplash)

See also: CMA sets out principles for responsible AI development 

