Twitter issued its response to last month’s controversy over how its algorithm crops photos.
The controversy began with a tweet by Iqlusion co-founder Tony Arcieri showing that, when cropping a photo containing headshots of Sen. Mitch McConnell (R-Ky.) and former President Barack Obama, Twitter's algorithm repeatedly chose to display McConnell's face rather than Obama's. The tweet spurred a host of copycat examples that were shared across social platforms last month.
Twitter chief technology officer Parag Agrawal and chief design officer Dantley Davis said in a blog post Thursday that the social network’s analyses of the machine learning system that decides how images should be cropped before being displayed on Twitter have not uncovered any racial or gender bias.
They explained, “The image cropping system relies on saliency, which predicts where people might look first. For our initial bias analysis, we tested pairwise preference between two demographic groups (white-Black, white-Indian, white-Asian and male-female). In each trial, we combined two faces into the same image, with their order randomized, then computed the saliency map over the combined image. Then, we located the maximum of the saliency map, and recorded which demographic category it landed on. We repeated this 200 times for each pair of demographic categories and evaluated the frequency of preferring one over the other.”
Agrawal and Davis said Twitter is exploring ways to decrease its reliance on machine learning and give users more visibility and control over what their images will look like in tweets, adding that the social network is committed to following the “what you see is what you get” principle of design, meaning that the photo people see in the tweet composer is what it will look like in the tweet.
They cautioned that there will be exceptions, such as photos that aren’t a standard size or are very long or wide, and said the platform will continue to experiment with how to present those images without losing the creator’s focal point or compromising the integrity of the photo.
Agrawal and Davis concluded, “Bias in ML systems is an industrywide issue, and one we’re committed to improving on Twitter. We’re aware of our responsibility and want to work toward making it easier for everyone to understand how our systems work. While no system can be completely free of bias, we’ll continue to minimize bias through deliberate and thorough analysis, and share updates as we progress in this space.”