policy Archives - AI News
https://www.artificialintelligence-news.com/tag/policy/

X now permits AI-generated adult content
Mon, 03 Jun 2024
https://www.artificialintelligence-news.com/2024/06/03/x-permits-ai-generated-adult-content/
Social media network X has updated its rules to formally permit users to share consensually produced AI-generated NSFW content, provided it is clearly labelled. This change aligns with previous experiments under Elon Musk’s leadership, which involved hosting adult content within specific communities.

“We believe that users should be able to create, distribute, and consume material related to sexual themes as long as it is consensually produced and distributed. Sexual expression, visual or written, can be a legitimate form of artistic expression,” X’s updated ‘adult content’ policy states.

The policy further elaborates: “We believe in the autonomy of adults to engage with and create content that reflects their own beliefs, desires, and experiences, including those related to sexuality. We balance this freedom by restricting exposure to adult content for children or adult users who choose not to see it.”

Users can mark their posts as containing sensitive media, ensuring that such content is restricted from users under 18 or those who haven’t provided their birth dates.

While X’s violent content rules have similar guidelines, the platform maintains a strict stance against excessively gory content and depictions of sexual violence. Explicit threats or content inciting or glorifying violence remain prohibited.

X’s decision to allow graphic content is aimed at enabling users to participate in discussions about current events, including sharing relevant images and videos. 

Although X has never outright banned porn, these new clauses could pave the way for developing services centred around adult content, potentially creating a competitor to services like OnlyFans and enhancing its revenue streams. This would further Musk’s vision of X becoming an “everything app,” similar to China’s WeChat.

A 2022 Reuters report, citing internal company documents, indicated that approximately 13% of posts on the platform contained adult content. This percentage has likely increased, especially with the proliferation of porn bots on X.

See also: Elon Musk’s xAI secures $6B to challenge OpenAI in AI race

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post X now permits AI-generated adult content appeared first on AI News.

MIT publishes white papers to guide AI governance
Mon, 11 Dec 2023
https://www.artificialintelligence-news.com/2023/12/11/mit-publishes-white-papers-guide-ai-governance/
A committee of MIT leaders and scholars has published a series of white papers aiming to shape the future of AI governance in the US. The comprehensive framework outlined in these papers seeks to extend existing regulatory and liability approaches to effectively oversee AI while fostering its benefits and mitigating potential harm.

Titled “A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector,” the main policy paper proposes leveraging current US government entities to regulate AI tools within their respective domains.

Dan Huttenlocher, dean of the MIT Schwarzman College of Computing, emphasises the pragmatic approach of initially focusing on areas where human activity is already regulated and gradually expanding to address emerging risks associated with AI.

The framework underscores the importance of defining the purpose of AI tools, aligning regulations with specific applications and holding AI providers accountable for the intended use of their technologies.

Asu Ozdaglar, deputy dean of academics in the MIT Schwarzman College of Computing, believes having AI providers articulate the purpose and intent of their tools is crucial for determining liability in case of misuse.

Addressing the complexity of AI systems existing at multiple levels, the brief acknowledges the challenges of governing both general and specific AI tools. The proposal advocates for a self-regulatory organisation (SRO) structure to supplement existing agencies, offering responsive and flexible oversight tailored to the rapidly evolving AI landscape.

Furthermore, the policy papers call for advancements in auditing AI tools—exploring various pathways such as government-initiated, user-driven, or legal liability proceedings.

The consideration of a government-approved SRO – akin to the Financial Industry Regulatory Authority (FINRA) – is proposed to enhance domain-specific knowledge and facilitate practical engagement with the dynamic AI industry.

MIT’s involvement in AI governance stems from its recognised expertise in AI research, positioning the institution as a key contributor to addressing the challenges posed by evolving AI technologies. The release of these whitepapers signals MIT’s commitment to promoting responsible AI development and usage.

You can find MIT’s series of AI policy briefs here.

(Photo by Aaron Burden on Unsplash)

See also: AI & Big Data Expo: Demystifying AI and seeing past the hype


The post MIT publishes white papers to guide AI governance appeared first on AI News.

NIST announces AI consortium to shape US policies
Fri, 03 Nov 2023
https://www.artificialintelligence-news.com/2023/11/03/nist-announces-ai-consortium-shape-us-policies/
In a bid to address the challenges associated with the development and deployment of AI, the National Institute of Standards and Technology (NIST) has formed a new consortium. 

This development was announced in a document published in the Federal Register on November 2, alongside an official notice inviting applications from individuals with the relevant credentials.

The document states, “This notice is the initial step for NIST in collaborating with non-profit organisations, universities, other government agencies, and technology companies to address challenges associated with the development and deployment of AI.”

The primary objective of this collaboration is to create and implement specific policies and measurements that ensure a human-centred approach to AI safety and governance within the United States.

Collaborators within the consortium will be tasked with a range of functions, including the development of measurement and benchmarking tools, policy recommendations, red-teaming efforts, psychoanalysis, and environmental analysis.

NIST’s initiative comes in response to a recent executive order issued by US President Joseph Biden, which outlined six new standards for AI safety and security.

While European and Asian countries have been proactive in instituting policies governing AI systems concerning user and citizen privacy, security, and potential unintended consequences, the US has lagged.

President Biden’s executive order and the establishment of the Safety Institute Consortium mark significant strides in the right direction, yet there remains a lack of clarity regarding the timeline for the implementation of laws governing AI development and deployment in the US.

Many experts have expressed concerns about the adequacy of current laws, designed for conventional businesses and technology, when applied to the rapidly evolving AI sector.

The formation of the AI consortium signifies a crucial step towards shaping the future of AI policies in the US. It reflects a collaborative effort between government bodies, non-profit organisations, universities, and technology companies to ensure responsible and ethical AI practices within the nation.

(Photo by Muhammad Rizki on Unsplash)

See also: UK paper highlights AI risks ahead of global Safety Summit


The post NIST announces AI consortium to shape US policies appeared first on AI News.

UK Deputy PM: AI is the most ‘extensive’ industrial revolution yet
Mon, 14 Aug 2023
https://www.artificialintelligence-news.com/2023/08/14/uk-deputy-pm-ai-most-extensive-industrial-revolution-yet/
Britain’s Deputy Prime Minister Oliver Dowden has shared his view that AI will be the most “extensive” industrial revolution yet.

Dowden highlighted AI’s dual role, emphasising its capacity to augment productivity and streamline mundane tasks. However, he also put the spotlight on the looming threats it poses to democracies worldwide.

In an interview with The Times, Mr Dowden said: “This is a total revolution that is coming. It’s going to totally transform almost all elements of life over the coming years, and indeed, even months, in some cases.

“It is much faster than other revolutions that we’ve seen and much more extensive, whether that’s the invention of the internal combustion engine or the industrial revolution.”

Already making inroads into governmental processes, AI has been adopted for processing asylum claim applications within the UK’s Home Office. The potential for AI-driven automation also extends to reducing paperwork burdens in ministerial decision-making, ultimately enabling swifter and more efficient governance.

Sridhar Iyengar, Managing Director for Zoho Europe, commented:

“As AI continues to develop at a rapid pace, collaboration between government, business, and industry experts is needed to increase education and introduce regulations or guidelines which can guide its ethical use.

Only then can businesses confidently use AI in the right way and understand how to avoid any negative impact.”

While AI can expedite information analysis and facilitate decision-making, Dowden emphasised that the crucial task of making policy choices remains squarely within the human domain. He stressed that the objective is to utilise AI for tasks that it excels at – such as data collation – to facilitate informed decision-making by human leaders.

Discussing the broader economic implications of the AI revolution, Dowden likened the impending shift to the advent of the automobile. He recognised the potential for significant workforce upheaval and asserted that the government’s responsibility lies in aiding citizens’ transition as AI reshapes industries.

Sheila Flavell CBE, COO of FDM Group, explained:

“In order to truly maximise the potential of AI, the UK must prioritise a workforce of technically skilled staff capable of leading the development and deployment of AI to work alongside staff and make their day-to-day roles easier.

People such as graduates, ex-forces and returners are well-placed to play a central role in this workforce through education courses and training in AI, supporting businesses with this rapidly evolving technology.”

Dowden acknowledged the inherent risks posed by AI’s exponential growth. He warned of the potential for AI to be exploited by malicious actors—ranging from terrorists using it to gain knowledge of dangerous materials, to conducting large-scale hacking operations. 

Referring to a recent breach that exposed the personal details of thousands of officers and staff from the Police Service of Northern Ireland, Dowden said the incident was an “industrial scale breach of data” that was made possible by AI.

Andy Ward, VP of International for Absolute Software, said:

“We are in the midst of an AI revolution and for all the business benefits that AI brings, however, we must also be wary of the potential cybersecurity concerns that come with any new technology.

AI can be used to positive effect when bolstering cyber defences, playing a role in threat detection through data and pattern analysis to identify certain attacks, but we have to acknowledge that malicious actors also have access to AI to increase the sophistication of their threats.”

While urging a measured response to potential AI-driven threats, Dowden emphasised the importance of addressing risks and vulnerabilities proactively. He stressed the need to strike a balance between harnessing AI’s immense potential for societal progress and ensuring that safeguards are in place to counter its misuse.

Earlier this year, the UK announced that it will host a global summit to address AI risks.

(Image Credit: UK Government under CC BY 2.0 license)

See also: Google report highlights AI’s impact on the UK economy


The post UK Deputy PM: AI is the most ‘extensive’ industrial revolution yet appeared first on AI News.

GitHub CEO: The EU ‘will define how the world regulates AI’
Mon, 06 Feb 2023
https://www.artificialintelligence-news.com/2023/02/06/github-ceo-eu-will-define-how-world-regulates-ai/
GitHub CEO Thomas Dohmke addressed the EU Open Source Policy Summit in Brussels and gave his views on the bloc’s upcoming AI Act.

“The AI Act will define how the world regulates AI and we need to get it right, for developers and the open-source community,” said Dohmke.

Dohmke was born and grew up in Germany but now lives in the US. As such, he is all too aware of the widespread belief that the EU cannot lead when it comes to tech innovation.

“As a European, I love seeing how open-source AI innovations are beginning to break the narrative that only the US and China can lead on tech innovation.”

“I’ll be honest, as a European living in the United States, this is a pervasive – and often true – narrative. But this can change. And it’s already beginning to, thanks to open-source developers.”

AI will revolutionise just about every aspect of our lives. Regulation is vital to minimise the risks associated with AI while allowing the benefits to flourish.

“Together, OSS (Open Source Software) developers will use AI to help make our lives better. I have no doubt that OSS developers will help build AI innovations that empower those with disabilities, help us solve climate change, and save lives.”

A risk of overregulation is that it drives innovation elsewhere. Startups are more likely to establish themselves in countries like the US and China where they’re likely not subject to as strict regulations. Europe will find itself falling behind and having less influence on the global stage when it comes to AI.

“The AI Act is so crucial. This policy could well set the precedent for how the world regulates AI. It is foundationally important. Important for European technological leadership, and the future of the European economy itself. The AI Act must be fair and balanced for the open-source community.

“Policymakers should help us get there. The AI Act can foster democratised innovation and solidify Europe’s leadership in open, values-based artificial intelligence. That is why I believe that open-source developers should be exempt from the AI Act.”

In expanding on his belief that open-source developers should be exempt, Dohmke explains that the compliance burden should fall on those shipping products.

“OSS developers are often volunteers. Many are working two jobs. They are scientists, doctors, academics, professors, and university students alike. They don’t usually stand to profit from their contributions—and they certainly don’t have big budgets and compliance departments!”

EU lawmakers are hoping to agree on draft AI rules next month with the aim of winning the acceptance of member states by the end of the year.

“Open-source is forming the foundation of AI innovation in Europe. The US and China don’t have to win it all. Let’s break that narrative apart!

“Let’s give the open-source community the daylight and the clarity to grow their ideas and build them for the rest of the world! And by doing so, let’s give Europe the chance to be a leader in this new age of AI.”

GitHub’s policy paper on the AI Act can be found here.

(Image Credit: Collision Conf under CC BY 2.0 license)

Relevant: US and EU agree to collaborate on improving lives with AI


The post GitHub CEO: The EU ‘will define how the world regulates AI’ appeared first on AI News.

Democrats renew push for ‘algorithmic accountability’
Fri, 04 Feb 2022
https://www.artificialintelligence-news.com/2022/02/04/democrats-renew-push-for-algorithmic-accountability/
Democrats have reintroduced their Algorithmic Accountability Act that seeks to hold tech firms accountable for bias in their algorithms.

The bill is an updated version of one first introduced by Senator Ron Wyden (D-OR) in 2019 that never passed the House or Senate. The updated bill was introduced this week by Wyden alongside Senator Cory Booker (D-NJ) and Representative Yvette Clarke (D-NY).

Concern about bias in algorithms is increasing as they become used for ever more critical decisions. Bias would lead to inequalities being automated—with some people being given more opportunities than others.

“As algorithms and other automated decision systems take on increasingly prominent roles in our lives, we have a responsibility to ensure that they are adequately assessed for biases that may disadvantage minority or marginalised communities,” said Booker.

A human can always be held accountable for a decision to, say, reject a mortgage/loan application. There’s currently little-to-no accountability for algorithmic decisions.

Representative Yvette Clarke explained:

“When algorithms determine who goes to college, who gets healthcare, who gets a home, and even who goes to prison, algorithmic discrimination must be treated as the highly significant issue that it is.

These large and impactful decisions, which have become increasingly void of human input, are forming the foundation of our American society that generations to come will build upon. And yet, they are subject to a wide range of flaws from programming bias to faulty datasets that can reinforce broader societal discrimination, particularly against women and people of colour.

It is long past time Congress act to hold companies and software developers accountable for their discrimination by automation.

With our renewed Algorithmic Accountability Act, large companies will no longer be able to turn a blind eye towards the deleterious impact of their automated systems, intended or not. We must ensure that our 21st Century technologies become tools of empowerment, rather than marginalisation and seclusion.”

The bill would force audits of AI systems, with findings reported to the Federal Trade Commission. A public database would be created so decisions can be reviewed, giving confidence to consumers.

“If someone decides not to rent you a house because of the colour of your skin, that’s flat-out illegal discrimination. Using a flawed algorithm or software that results in discrimination and bias is just as bad,” commented Wyden.

“Our bill will pull back the curtain on the secret algorithms that can decide whether Americans get to see a doctor, rent a house, or get into a school. Transparency and accountability are essential to give consumers choice and provide policymakers with the information needed to set the rules of the road for critical decision systems.”

In our predictions for the AI industry in 2022, we predicted an increased focus on Explainable AI (XAI). XAI is artificial intelligence in which the results of the solution can be understood by humans and is seen as a partial solution to algorithmic bias.

“Too often, Big Tech’s algorithms put profits before people, from negatively impacting young people’s mental health, to discriminating against people based on race, ethnicity, or gender, and everything in between,” said Senator Tammy Baldwin (D-Wis), who is co-sponsoring the bill.

“It is long past time for the American public and policymakers to get a look under the hood and see how these algorithms are being used and what next steps need to be taken to protect consumers.”

Joining Baldwin in co-sponsoring the Algorithmic Accountability Act are Senators Brian Schatz (D-Hawaii), Mazie Hirono (D-Hawaii), Ben Ray Luján (D-NM), Bob Casey (D-Pa), and Martin Heinrich (D-NM).

A copy of the full bill is available here (PDF).

(Photo by Darren Halstead on Unsplash)


The post Democrats renew push for ‘algorithmic accountability’ appeared first on AI News.

The UK is changing its data laws to boost its digital economy
Thu, 26 Aug 2021
https://www.artificialintelligence-news.com/2021/08/26/uk-changing-data-laws-boost-digital-economy/
Britain will diverge from EU data laws that have been criticised as being overly strict and driving investment and innovation out of Europe.

Culture Secretary Oliver Dowden has confirmed the UK Government’s intention to diverge from key parts of the infamous General Data Protection Regulation (GDPR). Estimates suggest there is as much as £11 billion worth of trade that goes unrealised around the world due to barriers associated with data transfers.

“Now that we have left the EU, I’m determined to seize the opportunity by developing a world-leading data policy that will deliver a Brexit dividend for individuals and businesses across the UK,” said Dowden.

When GDPR came into effect, it received its fair share of both praise and criticism. On the one hand, GDPR admirably sought to protect the data of consumers. On the other, “pointless” cookie popups, extra paperwork, and concerns about hefty fines have caused frustration and led many businesses to pack their bags and take their jobs, innovation, and services to less strict regimes.

GDPR is just one example. Another would be Articles 11 and 13 of the EU Copyright Directive, which some – including the inventor of the World Wide Web, Sir Tim Berners-Lee, and Wikipedia founder Jimmy Wales – have opposed as an “upload filter”, “link tax”, and “meme killer”. This blog post from YouTube explained why creators should care about Europe’s increasingly strict laws.

Mr Dowden said the new reforms would be “based on common sense, not box-ticking” but uphold the necessary safeguards to protect people’s privacy.

What will the impact be on the UK’s AI industry?

AI is, of course, powered by data—masses of it. The idea of mass data collection terrifies many people but is harmless so long as it’s truly anonymised. Arguably, it’s a lack of data that should be more concerning as biases in many algorithms today are largely due to limited datasets that don’t represent the full diversity of our societies.

Western facial recognition algorithms, for example, produce far more false positives for minorities than for white men—leading to automated racial profiling. A 2010 study (PDF) by researchers at NIST and the University of Texas found that algorithms designed and tested in East Asia are better at recognising East Asians.

However, the data must be collected responsibly and checked as thoroughly as possible. Last year, MIT was forced to take offline a popular dataset called 80 Million Tiny Images that was created in 2008 to train AIs to detect objects after discovering that images were labelled with misogynistic and racist terms.

While the UK is a European leader in AI, few people are under any illusion that it could become a world leader in pure innovation and deployment, as it is simply unable to match the funding and resources available to powers like the US and China. Instead, experts believe the UK should build on its academic and diplomatic strengths to set the “gold standard” in ethical artificial intelligence.

“There’s an opportunity for us to set world-leading, gold standard data regulation which protects privacy, but does so in as light touch a way as possible,” Mr Dowden said.

As it diverges from the EU’s laws in the first major regulatory shakeup since Brexit, the UK needs to show it can strike a fair balance between the EU’s strict regime and the arguably too lax protections in many other countries.

The UK also needs to promote and support innovation while avoiding the “Singapore-on-Thames”-style model of a race to the bottom in standards, rights, and taxes that many Remain campaigners feared would happen if the country left the EU. Similarly, it needs to prove that “Global Britain” is more than just a soundbite.

To that end, Britain’s data watchdog is getting a shakeup and John Edwards, New Zealand’s current privacy commissioner, will head up the regulator.

“It is a great honour and responsibility to be considered for appointment to this key role as a watchdog for the information rights of the people of the United Kingdom,” said Edwards.

“There is a great opportunity to build on the wonderful work already done and I look forward to the challenge of steering the organisation and the British economy into a position of international leadership in the safe and trusted use of data for the benefit of all.”

The UK is also seeking global data partnerships with six countries: the United States, Australia, the Republic of Korea, Singapore, the Dubai International Finance Centre, and Colombia. Over the long term, it hopes to strike agreements with fast-growing markets like India and Brazil to facilitate data flows in scientific research, law enforcement, and more.

Commenting on the UK’s global data plans, Andrew Dyson, Global Co-Chair of DLA Piper’s Data Protection, Privacy and Security Group, said:

“The announcements are the first evidence of the UK’s vision to establish a bold new regulatory landscape for digital Britain post-Brexit. Earlier in the year, the UK and EU formally recognised each other’s data protection regimes—that allowed data to continue to flow freely after Brexit.

This announcement shows how the UK will start defining its own future regulatory pathways from here, with an expansion of digital trade a clear driver if you look at the willingness to consider potential recognition of data transfers to Australia, Singapore, India and the USA.

It will be interesting to see the further announcements that are sure to follow on reforms to the wider policy landscape that are just hinted at here, and of course the changes in oversight we can expect from a new Information Commissioner.”

An increasingly punitive EU is unlikely to react kindly to the news; it added clauses to its recent deal with the UK to prevent the country diverging too far from its own standards.

Mr Dowden, however, said there was “no reason” the EU should react with too much animosity as the bloc has reached data agreements with many countries outside of its regulatory orbit and the UK must be free to “set our own path”.

(Photo by Massimiliano Morosinotto on Unsplash)

Find out more about Digital Transformation Week North America, taking place on 9-10 November 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

The post The UK is changing its data laws to boost its digital economy appeared first on AI News.

Going for gold: Britain can set the standard in ethical AI https://www.artificialintelligence-news.com/2021/08/05/going-for-gold-britain-can-set-the-standard-in-ethical-ai/ Thu, 05 Aug 2021 09:59:34 +0000

A study by BCS, The Chartered Institute for IT has found the UK can set the “gold standard” in ethical artificial intelligence.

The UK – home to companies including DeepMind, Graphcore, Oxbotica, Darktrace, BenevolentAI, and others – is Europe’s leader in AI. However, the country is unable to match the funding and support available to counterparts residing in countries like the US and China.

Many experts have instead suggested that the UK should tap its strengths in leading universities and institutions, diplomacy, and democratic values to become a world leader in creating AI that cares about humanity.

Dr Bill Mitchell OBE, Director of Policy at BCS, The Chartered Institute for IT and a lead author of the report, said:

“The UK should set the ‘gold standard’ for professional and ethical AI, as a critical part of our economic recovery.

We all deserve to have understanding of, and confidence in, AI as it affects our lives over the coming years. To get there, the profession should be known as a go-to place for men and women from a diverse range of backgrounds, who reflect the needs of everyone they are engineering software for.

That might be credit scoring apps, cancer diagnoses based on training data, or software that decides if you get a job interview or not.”

Current biases in many AI systems could exacerbate existing societal problems, including the wealth gap and discrimination based on race, gender, sexual orientation, age, and more.

“It’s about developing a highly-skilled, ethical, and diverse workforce – and a political class – that understands AI well enough to deliver the right solutions for society,” explains Mitchell.

“That will take strong leadership from the government and access to digital skills training across the board.”

Public trust in AI has been damaged by high-profile missteps, including last summer’s crisis when an algorithm was used to estimate students’ exam grades. A follow-up survey from YouGov – commissioned by BCS – found that 53 percent of UK adults had no faith in any organisation to make judgements about them.

(Credit: BCS)

In May last year, the national press reported that the code written by Professor Neil Ferguson and his team at Imperial College London – which informed the decision to enter lockdown – was “totally unreliable”, further damaging public trust in software. Since then, articles in the science journal Nature have shown Professor Ferguson’s epidemiological code to be fit for purpose. In hindsight, this should be widely known, but most people don’t read Nature and still believe the press reports that the code was flawed.

The report found a large disparity in the competence and ethical practices of organisations using AI. One of its suggestions is for the government to create a framework of standards that must be met for the adoption of AI across both the public and private sectors.

The UK government’s National Data Strategy states: “Used badly, data could harm people or communities, or have its overwhelming benefits overshadowed by public mistrust.”

BCS’ report, Priorities For The National AI Strategy, builds on the work of the AI Council Roadmap and the National Data Strategy. It has been published to complement the UK government’s plan, the final version of which is due later this year.

A full copy of BCS’ report can be found here (PDF).

(Photo by Ethan Wilkinson on Unsplash)

Find out more about Digital Transformation Week North America, taking place on November 9-10 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

The post Going for gold: Britain can set the standard in ethical AI appeared first on AI News.

Aussie court rules AIs can be credited as inventors under patent law https://www.artificialintelligence-news.com/2021/08/03/aussie-court-rules-ais-can-be-credited-as-inventors-under-patent-law/ Tue, 03 Aug 2021 16:10:43 +0000

A federal court in Australia has ruled that AI systems can be credited as inventors under patent law in a case that could set a global precedent.

Ryan Abbott, a professor at the University of Surrey, has filed over a dozen patent applications around the world – including in the UK, US, New Zealand, and Australia – on behalf of US-based Dr Stephen Thaler.

The twist is that it’s not Thaler whom Abbott is attempting to credit as the inventor, but rather his AI system, known as DABUS.

“In my view, an inventor as recognised under the act can be an artificial intelligence system or device,” said Justice Jonathan Beach, overturning the original Australian decision. “We are both created and create. Why cannot our own creations also create?”

DABUS consists of neural networks and was used to invent an emergency warning light, a food container that improves grip and heat transfer, and more.

Until now, all of the patent applications had been rejected, including in Australia; each country determined that a human must be the credited inventor.

Whether AIs should be afforded certain “rights” similar to humans is a key debate, and one increasingly in need of answers. This patent case could be a first step towards establishing when increasingly capable machines should be treated like humans under the law.

DABUS was awarded its first patent, for “a food container based on fractal geometry”, by South Africa’s Companies and Intellectual Property Commission on June 24.

Following the patent award, Professor Adrian Hilton, Director of the Institute for People-Centred AI at the University of Surrey, commented:

“This is a truly historic case that recognises the need to change how we attribute invention. We are moving from an age in which invention was the preserve of people to an era where machines are capable of realising the inventive step, unleashing the potential of AI-generated inventions for the benefit of society.

The School of Law at the University of Surrey has taken a leading role in asking important philosophical questions such as whether innovation can only be a human phenomenon, and what happens legally when AI behaves like a person.”

AI News reached out to the patent experts at ACT | The App Association, which represents more than 5,000 app makers and connected device companies around the world, for their perspective.

Brian Scarpelli, Senior Global Policy Counsel at ACT | The App Association, commented:

“The App Association, in alignment with the plain language of patent laws across key jurisdictions (including Australia’s 1990 Patents Act), is opposed to the proposal that a patent may be granted for an invention devised by a machine, rather than by a natural person.

Today’s patent laws can, for certain kinds of AI inventions, appropriately support inventorship. Patent offices can use the existing requirements for software patentability as a starting point to identify necessary elements of patentable AI inventions and applications – for example for AI technology that is used to improve machine capability, where it can be delineated, declared, and evaluated in a way equivalent to software inventions.

But more generally, determinations regarding inventorship and authorship of works autonomously created by AI could represent a drastic shift in law and policy. This would have direct implications for policy questions about whether allowing patents on inventions made by machines furthers public policy goals, and could even reach into broader definitions of AI personhood.

Continued study, both by national and regional patent offices and by multilateral fora like the World Intellectual Property Organization, is going to be critical and needs to continue informing a comprehensive debate among policymakers.”

Feel free to let us know in the comments whether you believe AI systems should have similar legal protections and obligations to humans.

(Photo by Trollinho on Unsplash)

Find out more about Digital Transformation Week North America, taking place on November 9-10 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

The post Aussie court rules AIs can be credited as inventors under patent law appeared first on AI News.

CDEI: Public believes tech isn’t being fully utilised to tackle pandemic, greater use depends on governance trust https://www.artificialintelligence-news.com/2021/03/05/cdei-public-tech-tackle-pandemic-use-governance-trust/ Fri, 05 Mar 2021 09:47:47 +0000

Research from the UK government’s Centre for Data Ethics and Innovation (CDEI) has found the public believes technology isn’t being fully utilised to tackle the pandemic, but greater use requires trust in how it is governed.

CDEI advises the government on the responsible use of AI and data-driven technologies. Between June and December 2020, the advisory body polled over 12,000 people to gauge sentiment around how such technologies are being used.

Edwina Dunn, Deputy Chair for the CDEI, said:

“Data-driven technologies including AI have great potential for our economy and society. We need to ensure that the right governance regime is in place if we are to unlock the opportunities that these technologies present.

The CDEI will be playing its part to ensure that the UK is developing governance approaches that the public can have confidence in.”

Close to three quarters (72%) of respondents expressed confidence in digital technology having the potential to help tackle the pandemic—a belief shared across all demographics.

A majority (~69%) also support, in principle, the use of technologies such as wearables to assist with social distancing in the workplace.

Wearables haven’t yet been used to help counter the spread of coronavirus. The most widely deployed technology is the contact-tracing app, but its effectiveness has often been called into question.

Many people feel data-driven technologies are not being used to their full potential. Under half (42%) believe digital technology is improving the situation in the UK. Seven percent even think current technologies are making the situation worse.

The scepticism about the use of digital technologies in tackling the pandemic is less about the technology itself – with just 17 percent of respondents expressing that view – and more about a lack of faith that it will be used properly by people and organisations (39%).

John Whittingdale, Minister of State for Media and Data at the Department for Digital, Culture, Media and Sport, commented:

“We are determined to build back better and capitalise on all we have learnt from the pandemic, which has forced us to share data quickly, efficiently and responsibly for the public good. This research confirms that public trust in how we govern data is essential. 

Through our National Data Strategy, we have committed to unlocking the huge potential of data to tackle some of society’s greatest challenges, while maintaining our high standards of data protection and governance.”

When controlling for all other variables, the CDEI found that “trust that the right rules and regulations are in place” is the single biggest predictor of whether someone will support the use of digital technology.

Among the key ways to improve public trust are increased transparency and accountability. Less than half (45%) of respondents know where to raise concerns if they feel digital technology is causing harm.

CDEI’s research highlighted that people, on the whole, believe data-driven technologies can help tackle the pandemic. However, work needs to be done to improve trust in how such technologies are deployed and managed.

(Photo by Mangopear creative on Unsplash)

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

The post CDEI: Public believes tech isn’t being fully utilised to tackle pandemic, greater use depends on governance trust appeared first on AI News.
