Twitter turns to HackerOne community to help fix its AI biases

Twitter is enlisting the help of the HackerOne community to try to fix troubling biases in its AI models.

The image-cropping algorithm used by Twitter was intended to keep the most interesting parts of an image in the preview crop shown in people’s timelines. That worked well enough until users discovered last year that it favoured lighter skin tones over darker ones, and the breasts and legs of women over their faces.
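
Twitter has not published the cropping model itself, but it is known to be driven by a saliency predictor. Purely as a hedged illustration of the general technique, the sketch below picks the fixed-size crop window whose summed saliency is highest; the saliency map here is random noise standing in for a real model’s output.

```python
# A minimal sketch of saliency-based cropping, NOT Twitter's actual model:
# slide a fixed-size window over a per-pixel saliency map and keep the
# position with the highest total saliency.
import numpy as np

def best_crop(saliency: np.ndarray, crop_h: int, crop_w: int) -> tuple:
    """Return (top, left) of the crop window with maximal summed saliency."""
    H, W = saliency.shape
    # Integral image makes each window sum O(1) instead of O(crop_h * crop_w).
    integral = np.pad(saliency, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    best_score, best_pos = -np.inf, (0, 0)
    for top in range(H - crop_h + 1):
        for left in range(W - crop_w + 1):
            b, r = top + crop_h, left + crop_w
            score = (integral[b, r] - integral[top, r]
                     - integral[b, left] + integral[top, left])
            if score > best_score:
                best_score, best_pos = score, (top, left)
    return best_pos

rng = np.random.default_rng(0)
saliency_map = rng.random((90, 160))  # stand-in for a saliency model's prediction
print(best_crop(saliency_map, 45, 80))
```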

When researchers fed a picture of a black man and a white woman into the system, the algorithm displayed the white woman 64 percent of the time and the black man just 36 percent of the time. For images of a white woman and a black woman, the algorithm displayed the white woman 57 percent of the time.
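
For illustration only, here is the kind of tally behind such figures: each paired image is a trial, the “winner” is whoever the crop displays, and the rates are each person’s share of trials won. The outcomes below are fabricated to reproduce the reported 64/36 split; they are not Twitter’s data.

```python
# Hedged sketch of the arithmetic behind the preference rates.
# Trial outcomes are placeholders, not real experimental results.
from collections import Counter

trials = ["white woman"] * 64 + ["black man"] * 36  # fabricated outcomes
counts = Counter(trials)
for person, wins in counts.items():
    print(f"{person}: {wins / len(trials):.0%} of crops")
```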

Twitter has offered bounties ranging from $500 to $3,500 to anyone who finds evidence of harmful bias in its algorithms. Successful entrants will also be invited to DEF CON, a major hacker convention.

Rumman Chowdhury, Director of Software Engineering at Twitter, and Jutta Williams, Product Manager, wrote in a blog post:

“We want to take this work a step further by inviting and incentivizing the community to help identify potential harms of this algorithm beyond what we identified ourselves.”

Twitter initially denied the problem, so it’s good to see the company now taking responsibility and attempting to fix it. In doing so, Twitter says it wants to “set a precedent at Twitter, and in the industry, for proactive and collective identification of algorithmic harms.”

Three staffers from Twitter’s Machine Learning Ethics, Transparency, and Accountability department found biases in their own tests and claim the algorithm is, on average, around four percent more likely to display people with lighter skin than those with darker skin, and eight percent more likely to display women than men.

However, the staffers found no evidence that certain parts of people’s bodies were more likely to be displayed than others.

“We found that no more than 3 out of 100 images per gender have the crop not on the head,” they explained in a paper that was published on arXiv.
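
As a rough sketch of how such a head-coverage check might be run, assuming face boxes from an off-the-shelf detector (none of this reflects the paper’s actual pipeline), the snippet below counts how often a chosen crop rectangle fails to fully contain the detected head.

```python
# Hedged sketch: count crops that miss the head, given hypothetical
# (crop, head-box) pairs. Boxes are (top, left, bottom, right); the
# sample data below is made up for illustration.

def crop_misses_head(crop: tuple, head: tuple) -> bool:
    """True if the head box is not fully inside the crop box."""
    ct, cl, cb, cr = crop
    ht, hl, hb, hr = head
    return not (ct <= ht and cl <= hl and hb <= cb and hr <= cr)

samples = [
    ((0, 0, 400, 300), (50, 80, 150, 180)),    # head inside crop
    ((200, 0, 600, 300), (50, 80, 150, 180)),  # crop landed below the head
    ((0, 0, 400, 300), (20, 40, 120, 140)),    # head inside crop
]
misses = sum(crop_misses_head(c, h) for c, h in samples)
print(f"{misses} of {len(samples)} crops not on the head")
```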

Twitter has gradually ditched its problematic image-cropping algorithm and doesn’t seem to be in a rush to reinstate it anytime soon.

In its place, Twitter has been rolling out the ability for users to control how their images are cropped.

“We considered the trade-offs between the speed and consistency of automated cropping with the potential risks we saw in this research,” wrote Chowdhury in a blog post in May.

“One of our conclusions is that not everything on Twitter is a good candidate for an algorithm, and in this case, how to crop an image is a decision best made by people.”

The HackerOne page for the challenge can be found here.

(Photo by Edgar MORAN on Unsplash)

