AI bias harms over a third of businesses, 81% want more regulation

Ryan Daws is a senior editor at TechForge Media with over a decade of experience in crafting compelling narratives and making complex topics accessible. His articles and interviews with industry leaders have earned him recognition as a key influencer by organisations like Onalytica. Under his leadership, publications have been praised by analyst firms such as Forrester for their excellence and performance. Connect with him on X (@gadget_ry) or Mastodon (@gadgetry@techhub.social)


AI bias is already harming businesses and there’s significant appetite for more regulation to help counter the problem.

The findings come from the State of AI Bias report by DataRobot in collaboration with the World Economic Forum and global academic leaders. The report involved responses from over 350 organisations across industries.

Kay Firth-Butterfield, Head of AI and Machine Learning at the World Economic Forum, said: 

“DataRobot’s research shows what many in the artificial intelligence field have long known to be true: the line of what is and is not ethical when it comes to AI solutions has been too blurry for too long.

The CIOs, IT directors and managers, data scientists, and development leads polled in this research clearly understand and appreciate the gravity and impact at play when it comes to AI and ethics.”

Just over half (54%) of respondents have “deep concerns” around the risk of AI bias, while a much higher percentage (81%) want more government regulation to prevent it.

Given the still relatively limited adoption of AI across most organisations, a concerning number report harm from bias.

Over a third (36%) of organisations experienced challenges or a direct negative business impact from AI bias in their algorithms. This includes:

  • Lost revenue (62%)
  • Lost customers (61%)
  • Lost employees (43%)
  • Incurred legal fees due to a lawsuit or legal action (35%)
  • Damaged brand reputation/media backlash (6%)

Ted Kwartler, VP of Trusted AI at DataRobot, commented:

“The core challenge to eliminate bias is understanding why algorithms arrived at certain decisions in the first place.

Organisations need guidance when it comes to navigating AI bias and the complex issues attached. There has been progress, including the EU proposed AI principles and regulations, but there’s still more to be done to ensure models are fair, trusted, and explainable.”

Four key challenges were identified as reasons why organisations struggle to counter bias:

  1. Understanding why an AI was led to make a specific decision
  2. Comprehending patterns between input values and AI decisions
  3. Developing trustworthy algorithms
  4. Determining what data is used to train AI
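To make the challenges above concrete, here is a minimal, hypothetical sketch of one widely used bias check, the demographic parity difference, which compares a model's positive-prediction rates across groups. The function and data below are purely illustrative and are not drawn from the DataRobot report:

```python
# Minimal sketch: checking a model's predictions for one common bias
# signal (demographic parity difference) across a sensitive attribute.
# All names and data here are made up for illustration.

def demographic_parity_difference(predictions, groups):
    """Difference in positive-prediction rates between groups.

    predictions: list of 0/1 model outputs
    groups: list of group labels of the same length, e.g. "A"/"B"
    """
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(predictions[i] for i in idx) / len(idx)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan-approval predictions for two groups
preds = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")  # group A 0.75 vs group B 0.25
```

A gap near zero suggests both groups receive positive outcomes at similar rates; a large gap is one signal that an algorithm may warrant the kind of scrutiny the report describes.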

Fortunately, a growing number of solutions to help counter or reduce AI bias are becoming available as the industry matures.

“The market for responsible AI solutions will double in 2022,” wrote Forrester VP and Principal Analyst Brandon Purcell in his Predictions 2022: Artificial Intelligence (paywall) report.

“Responsible AI solutions offer a range of capabilities that help companies turn AI principles such as fairness and transparency into consistent practices. Demand for these solutions will likely double next year as interest extends beyond highly regulated industries into all enterprises using AI for critical business operations.”

(Photo by Darren Halstead on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.
