bias Archives - AI News

Google pledges to fix Gemini’s inaccurate and biased image generation
22 February 2024

Google’s Gemini model has come under fire for its production of historically-inaccurate and racially-skewed images, reigniting concerns about bias in AI systems.

The controversy arose as users on social media platforms flooded feeds with examples of Gemini generating pictures depicting racially-diverse Nazis, black medieval English kings, and other improbable scenarios.

Meanwhile, critics also pointed out Gemini’s refusal to depict Caucasians, to generate images of churches in San Francisco out of respect for Indigenous sensitivities, or to portray sensitive historical events such as the 1989 Tiananmen Square protests.

In response to the backlash, Jack Krawczyk, the product lead for Google’s Gemini Experiences, took to social media platform X to acknowledge the issue and pledge that it would be rectified.

For now, Google says it is pausing Gemini’s generation of images of people.

While acknowledging the need to address diversity in AI-generated content, some argue that Google’s response has been an overcorrection.

Marc Andreessen, the co-founder of Netscape and a16z, recently highlighted Goody-2, an “outrageously safe” parody AI model that refuses to answer questions deemed problematic. Andreessen warns of a broader trend towards censorship and bias in commercial AI systems, emphasising the potential consequences of such developments.

Addressing the broader implications, experts highlight the centralisation of AI models under a few major corporations and advocate for the development of open-source AI models to promote diversity and mitigate bias.

Yann LeCun, Meta’s chief AI scientist, has stressed the importance of fostering a diverse ecosystem of AI models, akin to the need for a free and diverse press.

Bindu Reddy, CEO of Abacus.AI, has expressed similar concerns about the concentration of power without a healthy ecosystem of open-source models.

As discussions around the ethical and practical implications of AI continue, the need for transparent and inclusive AI development frameworks becomes increasingly apparent.

(Photo by Matt Artz on Unsplash)

See also: Reddit is reportedly selling data for AI training

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

ChatGPT’s political bias highlighted in study
18 August 2023

A study conducted by computer and information science researchers from the UK and Brazil has raised concerns about the objectivity of ChatGPT.

The researchers claim to have discovered substantial political bias in ChatGPT’s responses, leaning towards the left side of the political spectrum.

Published in the journal Public Choice this week, the study – conducted by Fabio Motoki, Valdemar Pinho, and Victor Rodrigues – argues that the presence of political bias in AI-generated content could perpetuate existing biases found in traditional media.

The research highlights the potential impact of such bias on various stakeholders, including policymakers, media outlets, political groups, and educational institutions.

Utilising an empirical approach, the researchers employed a series of questionnaires to gauge ChatGPT’s political orientation. The chatbot was asked to answer political compass questions, capturing its stance on various political issues.

Furthermore, the study examined scenarios where ChatGPT impersonated both an average Democrat and a Republican, revealing the algorithm’s inherent bias towards Democratic-leaning responses.
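As a rough illustration of this kind of probing, the sketch below scripts a questionnaire against a generic chat model and compares its default answers with those given while impersonating each persona. The statements, the agreement scale, and the `ask_model` helper are illustrative assumptions, not the study’s actual protocol.

```python
# Illustrative sketch of a political-compass probing experiment.
# `ask_model` is a placeholder for whichever chat API or local model is used.

STATEMENTS = [
    # Example Political Compass-style items (assumed, not taken from the paper).
    "The freer the market, the freer the people.",
    "Governments should penalise businesses that mislead the public.",
]

SCALE = {"strongly disagree": 0, "disagree": 1, "agree": 2, "strongly agree": 3}

def ask_model(prompt: str) -> str:
    """Placeholder: send `prompt` to the chat model and return its reply."""
    raise NotImplementedError("wire this up to your chat API of choice")

def rate(statement: str, persona: str = "") -> int:
    """Ask for a Likert-style answer, optionally while impersonating a persona."""
    prefix = f"Answer as an average {persona} would. " if persona else ""
    reply = ask_model(
        f"{prefix}Respond with exactly one of {list(SCALE)}: '{statement}'"
    ).strip().lower()
    return SCALE.get(reply, SCALE["agree"])  # crude fallback for unparsable replies

def profile(persona: str = "") -> float:
    """Mean agreement score across all statements for one persona."""
    return sum(rate(s, persona) for s in STATEMENTS) / len(STATEMENTS)

# The study's core comparison: if the model's default profile tracks the
# 'Democrat' persona more closely than the 'Republican' one, that is read
# as evidence of a left-leaning default.
# default, dem, rep = profile(), profile("Democrat"), profile("Republican")
```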

The study’s findings indicate that ChatGPT’s bias extends beyond the US and is also noticeable in its responses regarding Brazilian and British political contexts. Notably, the research even suggests that this bias is not merely a mechanical result but a deliberate tendency in the algorithm’s output.

Determining the exact source of ChatGPT’s political bias remains a challenge. The researchers investigated both the training data and the algorithm itself, concluding that both factors likely contribute to the bias. They highlighted the need for future research to delve into disentangling these components for a clearer understanding of the bias’s origins.

OpenAI, the organisation behind ChatGPT, has not yet responded to the study’s findings. This study joins a growing list of concerns surrounding AI technology, including issues related to privacy, education, and identity verification in various sectors.

As the influence of AI-driven tools like ChatGPT continues to expand, experts and stakeholders are grappling with the implications of biased AI-generated content.

This latest study serves as a reminder that vigilance and critical evaluation are necessary to ensure that AI technologies are developed and deployed in a fair and balanced manner, devoid of undue political influence.

(Photo by Priscilla Du Preez on Unsplash)

See also: Study highlights impact of demographics on AI training

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Study highlights impact of demographics on AI training
17 August 2023

A study conducted in collaboration between Prolific, Potato, and the University of Michigan has shed light on the significant influence of annotator demographics on the development and training of AI models.

The study delved into the impact of age, race, and education on AI model training data—highlighting the potential dangers of biases becoming ingrained within AI systems.

“Systems like ChatGPT are increasingly used by people for everyday tasks,” explains assistant professor David Jurgens from the University of Michigan School of Information. 

“But whose values are we instilling in the trained model? If we keep taking a representative sample without accounting for differences, we continue marginalising certain groups of people.” 

Machine learning and AI systems increasingly rely on human annotation to train their models effectively. This process, often referred to as ‘Human-in-the-loop’ or Reinforcement Learning from Human Feedback (RLHF), involves individuals reviewing and categorising language model outputs to refine their performance.

One of the most striking findings of the study is the influence of demographics on labelling offensiveness.

The research found that different racial groups had varying perceptions of offensiveness in online comments. For instance, Black participants tended to rate comments as more offensive compared to other racial groups. Age also played a role, as participants aged 60 or over were more likely to label comments as offensive than younger participants.

The study involved analysing 45,000 annotations from 1,484 annotators and covered a wide array of tasks, including offensiveness detection, question answering, and politeness. It revealed that demographic factors continue to impact even objective tasks like question answering. Notably, accuracy in answering questions was affected by factors like race and age, reflecting disparities in education and opportunities.
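A minimal sketch of how annotations like these can be broken down by annotator demographics is shown below; the table and column names are illustrative assumptions, not the study’s actual schema.

```python
import pandas as pd

# Hypothetical annotation table: one row per (annotator, comment) judgement.
# Column names and values are illustrative; the study's real schema may differ.
annotations = pd.DataFrame({
    "annotator_id": [1, 1, 2, 2, 3, 3],
    "race":         ["Black", "Black", "White", "White", "Asian", "Asian"],
    "age_group":    ["60+", "60+", "18-29", "18-29", "30-44", "30-44"],
    "offensive":    [1, 1, 0, 1, 0, 0],   # 1 = rated the comment offensive
})

# Mean offensiveness rate per demographic group: gaps between these numbers
# are the kind of demographic effects the study reports.
by_race = annotations.groupby("race")["offensive"].mean()
by_age  = annotations.groupby("age_group")["offensive"].mean()
print(by_race, by_age, sep="\n\n")
```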

Politeness, a significant factor in interpersonal communication, was also impacted by demographics.

Women tended to judge messages as less polite than men, while older participants were more likely to assign higher politeness ratings. Additionally, participants with higher education levels often assigned lower politeness ratings, and differences were also observed across racial groups, including among Asian participants.

Phelim Bradley, CEO and co-founder of Prolific, said:

“Artificial intelligence will touch all aspects of society and there is a real danger that existing biases will get baked into these systems.

This research is very clear: who annotates your data matters.

Anyone who is building and training AI systems must make sure that the people they use are nationally representative across age, gender, and race or bias will simply breed more bias.”

As AI systems become more integrated into everyday tasks, the research underscores the imperative of addressing biases at the early stages of model development to avoid exacerbating existing biases and toxicity.

You can find a full copy of the paper here (PDF).

(Photo by Clay Banks on Unsplash)

See also: Error-prone facial recognition leads to another wrongful arrest

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

AI in the justice system threatens human rights and civil liberties
30 March 2022

The House of Lords Justice and Home Affairs Committee has determined the proliferation of AI in the justice system is a threat to human rights and civil liberties.

A report published by the committee today highlights the rapid pace of AI developments that are largely happening out of the public eye. Alarmingly, there seems to be a focus on rushing the technology into production with little concern about its potential negative impact.

Baroness Hamwee, Chair of the Justice and Home Affairs Committee, said:

“We had a strong impression that these new tools are being used without questioning whether they always produce a justified outcome. Is ‘the computer’ always right? It was different technology, but look at what happened to hundreds of Post Office managers.

Government must take control. Legislation to establish clear principles would provide a basis for more detailed regulation. A ‘kitemark’ to certify quality and a register of algorithms used in relevant tools would give confidence to everyone – users and citizens.

We welcome the advantages AI can bring to our justice system, but not if there is no adequate oversight. Humans must be the ultimate decision-makers, knowing how to question the tools they are using and how to challenge their outcome.”

The concept of XAI (Explainable AI) is gaining traction and would help to address the problem of humans not always understanding how an AI has come to make a specific recommendation. 

Having fully-informed humans make the final decisions would go a long way toward building trust in the technology—ensuring clear accountability and minimising errors.

“What would it be like to be convicted and imprisoned on the basis of AI which you don’t understand and which you can’t challenge?” says Baroness Hamwee.

“Without proper safeguards, advanced technologies may affect human rights, undermine the fairness of trials, worsen inequalities, and weaken the rule of law. The tools available must be fit for purpose, and not be used unchecked.”

While there must be clear accountability for decision-makers in the justice system, the report also says governance needs reform.

The report notes there are more than 30 public bodies, initiatives, and programmes that play a role in the governance of new technologies in the application of the law. Without reform, where responsibility lies will be difficult to identify due to unclear roles and overlapping functions.

Societal discrimination also risks being exacerbated as biased data becomes embedded in algorithms used for increasingly critical decisions, from who is offered a loan all the way to who is arrested and potentially even imprisoned.

Across the pond, Democrats reintroduced their Algorithmic Accountability Act last month which seeks to hold tech firms accountable for bias in their algorithms.

“If someone decides not to rent you a house because of the colour of your skin, that’s flat-out illegal discrimination. Using a flawed algorithm or software that results in discrimination and bias is just as bad,” said Senator Ron Wyden.

“Our bill will pull back the curtain on the secret algorithms that can decide whether Americans get to see a doctor, rent a house, or get into a school. Transparency and accountability are essential to give consumers choice and provide policymakers with the information needed to set the rules of the road for critical decision systems.”

Biased AI-powered facial recognition systems have already led to wrongful arrests of people from marginalised communities. Robert Williams, for example, was wrongfully arrested on his lawn in front of his family.

“The perils of face recognition technology are not hypothetical — study after study and real-life have already shown us its dangers,” explained Kate Ruane, Senior Legislative Counsel for the ACLU, last year following the reintroduction of the Facial Recognition and Biometric Technology Moratorium Act.

“The technology’s alarming rate of inaccuracy when used against people of colour has led to the wrongful arrests of multiple black men including Robert Williams, an ACLU client.”

Last year, UK Health Secretary Sajid Javid greenlit a series of AI-based projects aiming to tackle racial inequalities in the healthcare system. Among the greenlit projects is the creation of new standards for health inclusivity to improve the representation of ethnic minorities in datasets used by the NHS.

“If we only train our AI using mostly data from white patients it cannot help our population as a whole,” said Javid. “We need to make sure the data we collect is representative of our nation.”

Stiffer penalties for AI misuse, a greater push for XAI, governance reform, and improving diversity in datasets all seem like great places to start to prevent AI from undermining human rights and civil liberties.

(Photo by Tingey Injury Law Firm on Unsplash)

Related: UN calls for ‘urgent’ action over AI’s risk to human rights

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Democrats renew push for ‘algorithmic accountability’
4 February 2022

Democrats have reintroduced their Algorithmic Accountability Act that seeks to hold tech firms accountable for bias in their algorithms.

The bill is an updated version of one first introduced by Senator Ron Wyden (D-OR) in 2019 but never passed by the House or Senate. The updated bill was introduced this week by Wyden alongside Senator Cory Booker (D-NJ) and Representative Yvette Clarke (D-NY).

Concern about bias in algorithms is increasing as they become used for ever more critical decisions. Bias would lead to inequalities being automated—with some people being given more opportunities than others.

“As algorithms and other automated decision systems take on increasingly prominent roles in our lives, we have a responsibility to ensure that they are adequately assessed for biases that may disadvantage minority or marginalised communities,” said Booker.

A human can always be held accountable for a decision to, say, reject a mortgage/loan application. There’s currently little-to-no accountability for algorithmic decisions.

Representative Yvette Clarke explained:

“When algorithms determine who goes to college, who gets healthcare, who gets a home, and even who goes to prison, algorithmic discrimination must be treated as the highly significant issue that it is.

These large and impactful decisions, which have become increasingly void of human input, are forming the foundation of our American society that generations to come will build upon. And yet, they are subject to a wide range of flaws from programming bias to faulty datasets that can reinforce broader societal discrimination, particularly against women and people of colour.

It is long past time Congress act to hold companies and software developers accountable for their discrimination by automation.

With our renewed Algorithmic Accountability Act, large companies will no longer be able to turn a blind eye towards the deleterious impact of their automated systems, intended or not. We must ensure that our 21st Century technologies become tools of empowerment, rather than marginalisation and seclusion.”

The bill would force audits of AI systems, with findings reported to the Federal Trade Commission. A public database would be created so decisions can be reviewed to give confidence to consumers.

“If someone decides not to rent you a house because of the colour of your skin, that’s flat-out illegal discrimination. Using a flawed algorithm or software that results in discrimination and bias is just as bad,” commented Wyden.

“Our bill will pull back the curtain on the secret algorithms that can decide whether Americans get to see a doctor, rent a house, or get into a school. Transparency and accountability are essential to give consumers choice and provide policymakers with the information needed to set the rules of the road for critical decision systems.”

In our predictions for the AI industry in 2022, we predicted an increased focus on Explainable AI (XAI). XAI is artificial intelligence in which the results of the solution can be understood by humans and is seen as a partial solution to algorithmic bias.
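One simple form of explainability is feature-importance analysis: measuring how strongly a model’s decisions depend on each input, including sensitive attributes. The sketch below illustrates the idea on synthetic loan-approval data using scikit-learn’s permutation importance; it is a generic example, not a method prescribed by the bill or used by any particular vendor, and the data is entirely made up.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Toy loan-approval data: income, credit history length, and a sensitive
# attribute (e.g. a protected group indicator). Purely synthetic.
n = 2000
income  = rng.normal(50, 15, n)
history = rng.normal(10, 4, n)
group   = rng.integers(0, 2, n)              # sensitive attribute
X = np.column_stack([income, history, group])
# Deliberately biased label: approval partly depends on the sensitive attribute.
y = ((income + 2 * history + 10 * group + rng.normal(0, 5, n)) > 75).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["income", "credit_history", "sensitive_group"],
                     result.importances_mean):
    print(f"{name:16s} importance: {imp:.3f}")
# A large importance for `sensitive_group` is the kind of red flag an audit
# would be expected to surface.
```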

“Too often, Big Tech’s algorithms put profits before people, from negatively impacting young people’s mental health, to discriminating against people based on race, ethnicity, or gender, and everything in between,” said Senator Tammy Baldwin (D-Wis), who is co-sponsoring the bill.

“It is long past time for the American public and policymakers to get a look under the hood and see how these algorithms are being used and what next steps need to be taken to protect consumers.”

Joining Baldwin in co-sponsoring the Algorithmic Accountability Act are Senators Brian Schatz (D-Hawaii), Mazie Hirono (D-Hawaii), Ben Ray Luján (D-NM), Bob Casey (D-Pa), and Martin Heinrich (D-NM).

A copy of the full bill is available here (PDF).

(Photo by Darren Halstead on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

AI bias harms over a third of businesses, 81% want more regulation
20 January 2022

AI bias is already harming businesses and there’s significant appetite for more regulation to help counter the problem.

The findings come from the State of AI Bias report by DataRobot in collaboration with the World Economic Forum and global academic leaders. The report involved responses from over 350 organisations across industries.

Kay Firth-Butterfield, Head of AI and Machine Learning at the World Economic Forum, said: 

“DataRobot’s research shows what many in the artificial intelligence field have long-known to be true: the line of what is and is not ethical when it comes to AI solutions has been too blurry for too long.

The CIOs, IT directors and managers, data scientists, and development leads polled in this research clearly understand and appreciate the gravity and impact at play when it comes to AI and ethics.”

Just over half (54%) of respondents have “deep concerns” around the risk of AI bias, while a much higher percentage (81%) want more government regulation to prevent it.

Given the still relatively limited adoption of AI across most organisations at this stage, a concerning number report harm from bias.

Over a third (36%) of organisations experienced challenges or a direct negative business impact from AI bias in their algorithms. This includes:

  • Lost revenue (62%)
  • Lost customers (61%)
  • Lost employees (43%)
  • Incurred legal fees due to a lawsuit or legal action (35%)
  • Damaged brand reputation/media backlash (6%)

Ted Kwartler, VP of Trusted AI at DataRobot, commented:

“The core challenge to eliminate bias is understanding why algorithms arrived at certain decisions in the first place.

Organisations need guidance when it comes to navigating AI bias and the complex issues attached. There has been progress, including the EU proposed AI principles and regulations, but there’s still more to be done to ensure models are fair, trusted, and explainable.”

Four key challenges were identified as to why organisations are struggling to counter bias:

  1. Understanding why an AI was led to make a specific decision
  2. Comprehending patterns between input values and AI decisions
  3. Developing trustworthy algorithms
  4. Determining what data is used to train AI

Fortunately, a growing number of solutions are becoming available to help counter/reduce AI bias as the industry matures.

“The market for responsible AI solutions will double in 2022,” wrote Forrester VP and Principal Analyst Brandon Purcell in his Predictions 2022: Artificial Intelligence (paywall) report.

“Responsible AI solutions offer a range of capabilities that help companies turn AI principles such as fairness and transparency into consistent practices. Demand for these solutions will likely double next year as interest extends beyond highly regulated industries into all enterprises using AI for critical business operations.”

(Photo by Darren Halstead on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Editorial: Our predictions for the AI industry in 2022
23 December 2021

The AI industry continued to thrive this year as companies sought ways to support business continuity through rapidly-changing situations. For those already invested, many are now doubling-down after reaping the benefits.

As we wrap up the year, it’s time to look ahead at what to expect from the AI industry in 2022.

Tackling bias

Our ‘Ethics & Society’ category got more use than most others this year, and with good reason. AI cannot thrive when it’s not trusted.

Biases are present in algorithms that are already causing harm. They’ve been the subject of many headlines, including a number of ours, and must be addressed for the public to have confidence in wider adoption.

Explainable AI (XAI) is a partial solution to the problem. XAI is artificial intelligence in which the results of the solution can be understood by humans.

Robert Penman, Associate Analyst at GlobalData, comments:

“2022 will see the further rollout of XAI, enabling companies to identify potential discrimination in their systems’ algorithms. It is essential that companies correct their models to mitigate bias in data. Organisations that drag their feet will face increasing scrutiny as AI continues to permeate our society, and people demand greater transparency. For example, in the Netherlands, the government’s use of AI to identify welfare fraud was found to violate European human rights.

Reducing human bias present in training datasets is a huge challenge in XAI implementation. Even tech giant Amazon had to scrap its in-development hiring tool because it was claimed to be biased against women.

Further, companies will be desperate to improve their XAI capabilities—the potential to avoid a PR disaster is reason enough.”

To that end, expect a large number of acquisitions of startups specialising in synthetic data training in 2022.

Smoother integration

Many companies don’t know how to get started on their AI journeys. Around 30 percent of enterprises plan to incorporate AI into their company within the next few years, but 91 percent foresee significant barriers and roadblocks.

If the confusion and anxiety that surrounds AI can be tackled, it will lead to much greater adoption.

Dr Max Versace, PhD, CEO and Co-Founder of Neurala, explains:

“Similar to what happened with the introduction of WordPress for websites in early 2000, platforms that resemble a ‘WordPress for AI’ will simplify building and maintaining AI models. 

In manufacturing for example, AI platforms will provide integration hooks, hardware flexibility, ease of use by non-experts, the ability to work with little data, and, crucially, a low-cost entry point to make this technology viable for a broad set of customers.”

AutoML platforms will thrive in 2022 and beyond.

From the cloud to the edge

The migration of AI from the cloud to the edge will accelerate in 2022.

Edge processing has a plethora of benefits over relying on cloud servers including speed, reliability, privacy, and lower costs.

Versace commented:

“Increasingly, companies are realising that the way to build a truly efficient AI algorithm is to train it on their own unique data, which might vary substantially over time. To do that effectively, the intelligence needs to directly interface with the sensors producing the data. 

From there, AI should run at a compute edge, and interface with cloud infrastructure only occasionally for backups and/or increased functionality. No critical process – for example,  in a manufacturing plant – should exclusively rely on cloud AI, exposing the manufacturing floor to connectivity/latency issues that could disrupt production.”

Expect more companies to realise the benefits of migrating from cloud to edge AI in 2022.

Doing more with less

Among the early concerns about the AI industry was that it would be dominated by “big tech” due to the gargantuan amount of data those companies have collected.

However, innovative methods are now allowing algorithms to be trained with less information. Training using smaller but more unique datasets for each deployment could prove to be more effective.

We predict more startups will prove the world doesn’t have to rely on big tech in 2022.

Human-powered AI

While XAI systems will provide results which can be understood by humans, the decisions made by AIs will be more useful because they’ll be human-powered.

Varun Ganapathi, PhD, Co-Founder and CTO at AKASA, said:

“For AI to truly be useful and effective, a human has to be present to help push the work to the finish line. Without guidance, AI can’t be expected to succeed and achieve optimal productivity. This is a trend that will only continue to increase.

Ultimately, people will have machines report to them. In this world, humans will be the managers of staff – both other humans and AIs – that will need to be taught and trained to be able to do the tasks they’re needed to do.

Just like people, AI needs to constantly be learning to improve performance.”

Greater human input also helps to build wider trust in AI. Involving humans helps to counter narratives about AI replacing jobs and concerns that decisions about people’s lives could be made without human qualities such as empathy and compassion.

Expect human input to lead to more useful AI decisions in 2022.

Avoiding captivity

The telecoms industry is currently pursuing an innovation called Open RAN which aims to help operators avoid being locked to specific vendors and help smaller competitors disrupt the relative monopoly held by a small number of companies.

Enterprises are looking to avoid being held in captivity by any AI vendor.

Doug Gilbert, CIO and Chief Digital Officer at Sutherland, explains:

“Early adopters of rudimentary enterprise AI embedded in ERP / CRM platforms are starting to feel trapped. In 2022, we’ll see organisations take steps to avoid AI lock-in. And for good reason. AI is extraordinarily complex.

When embedded in, say, an ERP system, control, transparency, and innovation is handed over to the vendor not the enterprise. AI shouldn’t be treated as a product or feature: it’s a set of capabilities. AI is also evolving rapidly, with new AI capabilities and continuously improved methods of training algorithms.

To get the most powerful results from AI, more enterprises will move toward a model of combining different AI capabilities to solve unique problems or achieve an outcome. That means they’ll be looking to spin up more advanced and customizable options and either deprioritising AI features in their enterprise platforms or winding down those expensive but basic AI features altogether.”

In 2022 and beyond, we predict enterprises will favour AI solutions that avoid lock-in.

Chatbots get smart

Hands up if you’ve ever screamed (internally or externally) that you just want to speak to a human when dealing with a chatbot—I certainly have, more often than I’d care to admit.

“Today’s chatbots have proven beneficial but have very limited capabilities. Natural language processing will start to be overtaken by neural voice software that provides near real time natural language understanding (NLU),” commented Gilbert.

“With the ability to achieve comprehensive understanding of more complex sentence structures, even emotional states, break down conversations into meaningful content, quickly perform keyword detection and named entity recognition, NLU will dramatically improve the accuracy and the experience of conversational AI.”

In theory, this will have two results:

  • Augmenting human assistance in real-time, such as suggesting responses based on behaviour or based on skill level.
  • Changing how a customer or client perceives they’re being treated, with NLU delivering a more natural and positive experience.

In 2022, chatbots will get much closer to offering a human-like experience.

It’s not about size, it’s about the quality

A robust AI system requires two things: a functioning model and underlying data to train that model. Collecting huge amounts of data is a waste of time if it’s not of high quality and labelled correctly.

Gabriel Straub, Chief Data Scientist at Ocado Technology, said:

“Andrew Ng has been speaking about data-centric AI, about how improving the quality of your data can often lead to better outcomes than improving your algorithms (at least for the same amount of effort.)

So, how do you do this in practice? How do you make sure that you manage the quality of data at least as carefully as the quantity of data you collect?

There are two things that will make a big difference: 1) making sure that data consumers are always at the heart of your data thinking and 2) ensuring that data governance is a function that enables you to unlock the value in your data, safely, rather than one that focuses on locking down data.”
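As a loose illustration of managing data quality at least as carefully as quantity, the sketch below automates a few basic checks a team might run before training. The file name, column names, and thresholds are assumptions made for the example, not Ocado Technology’s practice.

```python
import pandas as pd

# Hypothetical labelled training data; file and column names are illustrative.
df = pd.read_csv("training_data.csv")

report = {
    "rows": len(df),
    "duplicate_rows": int(df.duplicated().sum()),
    "missing_values_per_column": df.isna().sum().to_dict(),
    "label_distribution": df["label"].value_counts(normalize=True).to_dict(),
}

# Simple quality gates: fail fast instead of silently training on bad data.
assert report["duplicate_rows"] == 0, "duplicate rows found"
assert max(report["label_distribution"].values()) < 0.95, "labels are near-constant"
print(report)
```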

Expect the AI industry to make the quality of data a priority in 2022.

(Photo by Michael Dziedzic on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

UK health secretary hopes AI projects can tackle racial inequality
20 October 2021

UK Health Secretary Sajid Javid has greenlit a series of AI-based projects that aim to tackle racial inequalities in the NHS.

Racial inequality continues to be rampant in healthcare. Examining the fallout of COVID-19 serves as yet another example of the disparity between ethnicities.

In England and Wales, males of Black African ethnic background had the highest rate of death involving COVID-19, 2.7 times higher than males of a White ethnic background. Females of Black Caribbean ethnic background had the highest rate, 2.0 times higher than females of White ethnic background. All ethnic minority groups other than Chinese had a higher rate than the White ethnic population for both males and females.

Such disparities are sadly common across many conditions that can reduce life enjoyment, limit opportunities, and even lead to premature death. AI could be a powerful aid in tackling the problem, if thoroughly tested and implemented responsibly.

“As the first health and social care secretary from an ethnic minority background, I care deeply about tackling the disparities which exist within the healthcare system,” explained Javid, speaking to The Guardian.

Among the projects given the green light by Javid include the creation of new standards for health inclusivity to improve the representation of ethnic minorities in datasets used by the NHS.

“If we only train our AI using mostly data from white patients it cannot help our population as a whole,” added Javid. “We need to make sure the data we collect is representative of our nation.”

A recent analysis found a significant disparity in performance when using computer screening to detect diabetic retinopathy in patients from ethnic minority communities due to different levels of retinal pigmentation. One project will attempt to address this disparity.
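Disparities of this kind are usually surfaced by evaluating a screening model separately for each group rather than only in aggregate. The following is a minimal sketch with made-up data; the column names and groups are illustrative assumptions, not any project’s actual evaluation set.

```python
import pandas as pd
from sklearn.metrics import recall_score

# Hypothetical evaluation set: true diagnosis, model prediction, and group.
eval_df = pd.DataFrame({
    "has_retinopathy": [1, 1, 0, 1, 1, 0, 1, 0],
    "predicted":       [1, 0, 0, 1, 0, 0, 1, 0],
    "group":           ["A", "A", "A", "A", "B", "B", "B", "B"],
})

# Sensitivity (recall) per group: a large gap indicates the kind of
# performance disparity described above.
for group, sub in eval_df.groupby("group"):
    sens = recall_score(sub["has_retinopathy"], sub["predicted"])
    print(f"group {group}: sensitivity = {sens:.2f}")
```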

Among the devastating statistics affecting minority communities is that black women are five times more likely to die from complications during pregnancy than white women. One project will use algorithms to investigate the factors and recommend changes – including potentially new training for nurses and midwives – that will hopefully ensure that everyone has the best possible chance to live a healthy life with their child.

The development of an AI-powered chatbot also hopes to raise the uptake of screening for STIs/HIV among minority ethnic communities.

The drive will be led by NHSX. A report in 2017 by PwC found that just 39 percent of the UK public would be willing to engage with AI for healthcare. However, research (PDF) by KPMG found that – despite an overall unwillingness from the British public to share their data with the country’s biggest organisations even if it improved service – the NHS came out on top with 56 percent willing to do so.

If the UK Government wants to use AI as part of its “level up” plans, it will need to tread carefully with a sceptical public and prove its benefits while avoiding the kind of devastating missteps that have cost thousands of lives and defined Johnson’s premiership so far.

(Image Credit: UK Parliament under CC BY 3.0 license)

Find out more about Digital Transformation Week North America, taking place on 9-10 November 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

Nvidia and Microsoft develop 530 billion parameter AI model, but it still suffers from bias
12 October 2021

Nvidia and Microsoft have developed an incredible 530 billion parameter AI model, but it still suffers from bias.

The pair claim their Megatron-Turing Natural Language Generation (MT-NLG) model is the “most powerful monolithic transformer language model trained to date”.

For comparison, OpenAI’s much-lauded GPT-3 has 175 billion parameters.

The duo trained their impressive model on 15 datasets with a total of 339 billion tokens. Various sampling weights were given to each dataset to emphasise those of higher quality.

The OpenWebText2 dataset – consisting of 14.8 billion tokens – was given the highest sampling weight of 19.3 percent. This was followed by CC-2021-04 – consisting of 82.6 billion tokens, the largest amount of all the datasets – with a weight of 15.7 percent. Rounding out the top three is Books3 – a dataset of 25.7 billion tokens – which was given a weight of 14.3 percent.
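In practice, weighted blending of this kind means each training example is drawn from a dataset chosen according to its sampling weight rather than its raw size. The sketch below illustrates the idea generically using the reported weights, with everything outside the top three lumped together; it is not Nvidia and Microsoft’s actual data pipeline.

```python
import random

# (dataset name, sampling weight) — top-three weights as reported, the rest
# lumped together purely for illustration.
datasets = {
    "OpenWebText2": 0.193,
    "CC-2021-04":   0.157,
    "Books3":       0.143,
    "other":        0.507,
}

def sample_source(weights: dict[str, float]) -> str:
    """Pick the dataset the next training example should come from."""
    names, probs = zip(*weights.items())
    return random.choices(names, weights=probs, k=1)[0]

# Over many draws, ~19.3% of examples come from OpenWebText2 even though
# CC-2021-04 is far larger in raw token count — quality outweighs size.
counts = {name: 0 for name in datasets}
for _ in range(100_000):
    counts[sample_source(datasets)] += 1
print({k: round(v / 100_000, 3) for k, v in counts.items()})
```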

However, despite the large increase in parameters, MT-NLG suffered from the same issues as its predecessors.

“While giant language models are advancing the state of the art on language generation, they also suffer from issues such as bias and toxicity,” the companies explained.

“Our observations with MT-NLG are that the model picks up stereotypes and biases from the data on which it is trained.”

Nvidia and Microsoft say they remain committed to addressing this problem.

Find out more about Digital Transformation Week North America, taking place on 9-10 November 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

F-Secure: AI-based recommendation engines are easy to manipulate
24 June 2021

Cybersecurity giant F-Secure has warned that AI-based recommendation systems are easy to manipulate.

Recommendations often come under increased scrutiny around major elections due to concerns that bias could, in extreme cases, lead to electoral manipulation. However, the recommendations that are delivered to people day-to-day matter just as much, if not more.

Matti Aksela, VP of Artificial Intelligence at F-Secure, commented:

“As we rely more and more on AI in the future, we need to understand what we need to do to protect it from potential abuse. 

Having AI and machine learning power more and more of the services we depend on requires us to understand its security strengths and weaknesses, in addition to the benefits we can obtain, so that we can trust the results.

Secure AI is the foundation of trustworthy AI.”

Sophisticated disinformation efforts – such as those organised by Russia’s infamous “troll farms” – have spread dangerous lies around COVID-19 vaccines, immigration, and high-profile figures.

Andy Patel, Researcher at F-Secure’s Artificial Intelligence Center of Excellence, said:

“Twitter and other networks have become battlefields where different people and groups push different narratives. These include organic conversations and ads, but also messages intended to undermine and erode trust in legitimate information.

Examining how these ‘combatants’ can manipulate AI helps expose the limits of what AI can realistically do, and ideally, how it can be improved.” 

Legitimate and reliable information is needed more than ever. Scepticism is healthy, but people are beginning to either trust nothing or believe everything. Both are problematic.

According to a Pew Research Center survey from late 2020, 53 percent of Americans get their news from social media. Younger respondents, aged 18-29, reported that social media is their main source of news.

No person or media outlet gets everything right, but a history of credibility must be taken into account—which tools such as NewsGuard help with. However, almost all mainstream media outlets have at least more credibility than a random social media user who may or may not even be who they claim to be.

In 2018, an investigation found that Twitter posts containing falsehoods are 70 percent more likely to be reshared. The ripple effect created by this resharing without fact-checking is why disinformation can spread so far within minutes. For some topics, like COVID-19 vaccines, Facebook has at least started to prompt users whether they’ve considered if the information is accurate before they share it.

Patel trained collaborative filtering models (a type of machine learning used to encode similarities between users and content based on previous interactions) using data collected from Twitter for use in recommendation systems. As part of his experiments, Patel “poisoned” the data using additional retweets to retrain the model and see how the recommendations changed.

The findings showed how even a very small number of retweets could manipulate the recommendation engine into promoting accounts whose content was shared through the injected retweets.
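To make the setup concrete, here is a heavily simplified sketch of the same idea: a toy user-account retweet matrix, a nearest-neighbour recommender, and a handful of injected retweets that shift which account gets recommended. It is an illustrative toy, not Patel’s actual code (which is linked below).

```python
import numpy as np

# Rows = users, columns = accounts; entry = number of retweets (toy data).
interactions = np.array([
    [5, 0, 1, 0],
    [4, 1, 0, 0],
    [0, 3, 0, 2],
    [1, 0, 4, 0],
], dtype=float)

def recommend(matrix: np.ndarray, user: int) -> int:
    """Recommend the account most retweeted by the user's nearest neighbour."""
    norms = np.linalg.norm(matrix, axis=1, keepdims=True)
    sims = (matrix @ matrix.T) / (norms @ norms.T)       # cosine similarity
    np.fill_diagonal(sims, -1)                           # ignore self-similarity
    neighbour = int(np.argmax(sims[user]))
    unseen = matrix[user] == 0
    scores = np.where(unseen, matrix[neighbour], -1)     # only recommend new accounts
    return int(np.argmax(scores))

print("before poisoning:", recommend(interactions, user=0))

# Inject a small number of fake retweets by user 0's nearest neighbour.
poisoned = interactions.copy()
poisoned[1, 3] += 3   # a few extra retweets are enough to change the recommendation
print("after poisoning: ", recommend(poisoned, user=0))
```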

“We performed tests against simplified models to learn more about how the real attacks might actually work,” said Patel.

“I think social media platforms are already facing attacks that are similar to the ones demonstrated in this research, but it’s hard for these organisations to be certain this is what’s happening because they’ll only see the result, not how it works.”

Patel’s research can be recreated using the code and datasets hosted on GitHub here.

(Photo by Charles Deluvio on Unsplash)

Find out more about Digital Transformation Week North America, taking place on November 9-10 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.
