Google pledges to fix Gemini’s inaccurate and biased image generation

Google’s Gemini model has come under fire for producing historically inaccurate and racially skewed images, reigniting concerns about bias in AI systems.

The controversy arose as users on social media flooded feeds with examples of Gemini generating pictures of racially diverse Nazis, black medieval English kings, and other improbable scenarios.

Meanwhile, critics also pointed out that Gemini refused to depict Caucasians, declined to generate images of churches in San Francisco out of respect for indigenous sensitivities, and would not portray sensitive historical events such as Tiananmen Square in 1989.

In response to the backlash, Jack Krawczyk, product lead for Google’s Gemini Experiences, acknowledged the issue on social media platform X and pledged to rectify it.

For now, Google says it is pausing Gemini’s ability to generate images of people.

While acknowledging the need to address diversity in AI-generated content, some argue that Google’s response has been an overcorrection.

Goody-2, an “outrageously safe” parody AI model that refuses to answer any question it deems problematic, has been held up as a satire of this trend. Marc Andreessen, the co-founder of Netscape and a16z, warns of a broader drift towards censorship and bias in commercial AI systems, emphasising the potential consequences of such developments.

Addressing the broader implications, experts highlight the centralisation of AI models under a few major corporations and advocate for the development of open-source AI models to promote diversity and mitigate bias.

Yann LeCun, Meta’s chief AI scientist, has stressed the importance of fostering a diverse ecosystem of AI models, likening it to the need for a free and diverse press.

Bindu Reddy, CEO of Abacus.AI, has voiced similar concerns about the concentration of power in the absence of a healthy ecosystem of open-source models.

As discussions around the ethical and practical implications of AI continue, the need for transparent and inclusive AI development frameworks becomes increasingly apparent.


See also: Reddit is reportedly selling data for AI training


Lack of STEM diversity is causing AI to have a ‘white male’ bias

A report from New York University’s AI Now Institute has found that a predominantly white, male coding workforce is causing bias in algorithms.

The report highlights that the lack of diverse representation at major technology companies such as Microsoft, Google, and Facebook – while gradually narrowing – is causing AIs to cater more towards white males.

At Facebook, for example, just 15 percent of AI staff are women. The problem is even more pronounced at Google, where the figure is just 10 percent.

Report authors Sarah Myers West, Meredith Whittaker, and Kate Crawford wrote:

“To date, the diversity problems of the AI industry and the issues of bias in the systems it builds have tended to be considered separately. We suggest that these are two versions of the same problem: issues of discrimination in the workforce and in system building are deeply intertwined.”

As artificial intelligence is used more widely across society, there is a danger that some groups will be cut off from its advantages while AI systems reinforce “a narrow idea of the ‘normal’ person”.

The researchers highlight examples of where this is already happening:

  • Amazon’s controversial Rekognition facial recognition AI struggled with dark-skinned females in particular, although separate analysis has found other AIs face similar difficulties with non-white males.
  • A résumé-scanning AI which relied on previous examples of successful applicants as a benchmark. The AI downgraded people who included “women’s” in their résumé or who attended women’s colleges (a failure mode illustrated in the sketch below).
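
How that happens is easy to demonstrate. Below is a minimal sketch in Python, using a toy perceptron and entirely invented résumé snippets rather than anything from the real system, of a screener trained on historically male-dominated hiring outcomes. The model learns a negative weight for the token “women’s” purely because it correlates with past rejections.

```python
# A minimal sketch, using a toy perceptron and made-up data (not the
# actual system), of how a screener trained on historically
# male-dominated hiring outcomes penalises gender-correlated terms
# rather than candidate quality.

from collections import defaultdict

# Toy history: 1 = hired, 0 = rejected. Because past hires were mostly
# men, the token "women's" appears mainly among the rejections.
resumes = [
    ("captain chess club", 1),
    ("software engineer python", 1),
    ("software engineer women's chess club", 0),
    ("captain women's soccer team", 0),
]

# Train a bag-of-words perceptron on the historical outcomes.
weights = defaultdict(float)
for _ in range(20):  # a few passes over the data
    for text, label in resumes:
        tokens = text.split()
        prediction = 1 if sum(weights[t] for t in tokens) > 0 else 0
        if prediction != label:  # perceptron update on mistakes only
            for t in tokens:
                weights[t] += label - prediction

# "women's" ends up with a negative weight: the model has encoded the
# bias in its training data, not anything about ability.
for token, weight in sorted(weights.items(), key=lambda kv: kv[1]):
    print(f"{token:>10}: {weight:+.0f}")
```

Simply deleting the offending token does not fix the underlying problem: correlated features, such as the names of women’s colleges, carry the same signal – which is why the report treats workforce diversity and system bias as one problem.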

AI is currently deployed in only a few life-changing areas, but that is rapidly changing. Law enforcement is already looking to use the technology to identify criminals, even preemptively in some cases, and to inform sentencing decisions – including whether someone should be granted bail.

“The use of AI systems for the classification, detection, and prediction of race and gender is in urgent need of re-evaluation,” the researchers noted. “The commercial deployment of these tools is cause for deep concern.”


Stanford’s institute ensuring AI ‘represents humanity’ lacks diversity

An institute established by Stanford University to address concerns that AI may not represent the whole of humanity is lacking in diversity.

The goal of the Institute for Human-Centered Artificial Intelligence is admirable, but the fact that it consists primarily of white males calls into question its ability to ensure adequate representation.

Cybersecurity expert Chad Loder noticed that not a single member of Stanford’s new AI faculty was black. Tech site Gizmodo reached out to Stanford and the university quickly added Juliana Bidadanure, an assistant professor of philosophy.

Part of the institute’s problem could be the very thing it’s attempting to address – that, while improving, there’s still a lack of diversity in STEM-based careers. With revolutionary technologies such as AI, parts of society are in danger of being left behind.

The institute has backing from some big hitters: people such as Bill Gates and Gavin Newsom have endorsed its founding principle that “creators and designers of AI must be broadly representative of humanity.”

Fighting Algorithmic Bias

Stanford isn’t the only institution fighting the good fight against bias in algorithms.

Earlier this week, AI News reported on the UK government’s launch of an investigation to determine the levels of bias in algorithms that could affect people’s lives.

Conducted by the Centre for Data Ethics and Innovation (CDEI), the investigation will focus on areas where AI has tremendous potential – such as policing, recruitment, and financial services – but where a flawed implementation could have a serious negative impact on lives.

Meanwhile, activists like Joy Buolamwini from the Algorithmic Justice League are doing their part to raise awareness of the dangers which bias in AI poses.

In a speech earlier this year, Buolamwini analysed popular facial recognition algorithms and found serious disparities in accuracy, particularly when recognising black females.

Just imagine surveillance being used with these algorithms: lighter-skinned males would be recognised in most cases, while darker-skinned females would be mistakenly stopped more often. We are in serious danger of automating profiling.
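
To illustrate the kind of analysis involved, here is a minimal sketch of a per-group accuracy audit. The group labels mirror those used in studies like Buolamwini’s, but the counts are invented for illustration and are not her measurements.

```python
# A minimal sketch of the per-group accuracy audit that surfaces these
# disparities. The group labels mirror those used in such studies, but
# the counts below are invented for illustration.

results = {
    # group: (correct_matches, total_trials) -- illustrative only
    "lighter-skinned males":   (995, 1000),
    "lighter-skinned females": (930, 1000),
    "darker-skinned males":    (880, 1000),
    "darker-skinned females":  (650, 1000),
}

accuracies = {}
for group, (correct, total) in results.items():
    accuracies[group] = correct / total
    print(f"{group:>24}: {accuracies[group]:.1%} accuracy")

# A single aggregate accuracy figure hides the gap between the best-
# and worst-served groups -- the number that matters when these systems
# are used for surveillance or policing.
gap = max(accuracies.values()) - min(accuracies.values())
print(f"largest inter-group gap: {gap:.1%}")
```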

Some efforts are being made to create AIs which detect unintentional bias in other algorithms – but it’s early days for such developments, and they will also need diverse creators.
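
For a flavour of what such a bias detector does, here is a minimal sketch of the simplest kind of automated check: comparing the rates at which a model issues positive decisions to different groups, often called demographic parity. The function name, the data, and the 20 percent tolerance are assumptions chosen for illustration, not an established standard.

```python
# A minimal sketch of one simple automated bias check: comparing the
# positive-decision rates a model gives different groups (demographic
# parity). Data and tolerance are assumptions for illustration.

def demographic_parity_gap(decisions):
    """decisions maps group name -> list of 0/1 model outputs."""
    rates = {group: sum(d) / len(d) for group, d in decisions.items()}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical loan-approval decisions from the model under audit.
gap, rates = demographic_parity_gap({
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 75.0% approved
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],   # 37.5% approved
})

print(rates)
if gap > 0.20:  # flag for human review beyond the assumed tolerance
    print(f"possible disparate impact: approval-rate gap of {gap:.0%}")
```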

However it is tackled, algorithmic bias needs to be eliminated before AI is adopted in areas of society where bias would have a negative impact on individuals.

