Looking into the impact of bias in AI-generated headshots
It seems like everywhere you turn there’s another article about AI and its future in business. Half of them sing the praises of the latest AI-driven tool to make an appearance, while the other half sound the alarm that it signals the end of jobs and the downfall of society.
The credit union industry is already familiar with the complexity of AI tools in lending: on one hand, federal agencies are looking into claims of digital redlining; on the other, tools are arriving specifically to help lenders find less discriminatory alternative (LDA) underwriting models.
Lately, one of the more popular, and seemingly low-stakes, subsets of this AI chatter is the AI headshot. You may have seen some of these on your LinkedIn feed: someone posts about deciding to give it a try, laughs about the versions that come back with too many fingers or misaligned eyes, but in the end shows off several decent-looking photos of themselves. All for around $20-30.
It seems like a fun, low-cost, and convenient way to get employee photos, especially for an organization like ours, with a mostly remote team located all over the country. We recently gave it a try with one of our new employees, Mark Volz. He gamely submitted ten selfies and awaited the results.
“When I saw the photos, none of them looked like me,” Mark explained. “They were all Asian men with facial hair, but they weren’t me. They didn’t even particularly look Filipino.” It’s also worth noting that the AI-generated images showed someone with lighter skin than in most of the photos Mark provided.
Imagine you’ve tried out this service for your organization: you pick a few people as test subjects, and hey, they look pretty good! So you roll it out for all staff, only to have some of those employees get the same kind of results Mark did. They may feel like their identity has effectively been erased. And beyond that, they might not feel comfortable pointing it out, because they know they only have so many opportunities to voice those kinds of objections without seeming like a complainer.
Why is this happening? Generative AI, of course, can only produce output based on what it’s been trained on. We don’t know what pools of images the creators of tools like Studioshot or TryitonAI used, but it certainly seems that, as in much of the media, certain groups are underrepresented. It’s a perfect example of inherent bias: there’s no conscious effort to exclude any group, but there’s also no effort to ensure that every group is included. So existing biases are reinforced.
“Growing up in Wisconsin, there weren’t a lot of other Filipinos around, so in a way I’m used to not seeing myself reflected in the culture around me,” said Mark. “But to see that invisibility reinforced in brand new technology is really disappointing.”
Images aren’t the only area where these subtler forms of bias can crop up. Only in recent years has the definition of what constitutes “professional” communication started to slowly expand, allowing people of color to bring more of their authentic selves to the table instead of having to code-switch to fit in with the company culture. But when tools like ChatGPT are used to support copywriting, it’s easy to fall back on homogenized, ‘whitewashed’ language. Or as ChatGPT itself puts it, “Recognizing the limitations and biases inherent in AI-generated copy is vital for organizations to actively address and mitigate them.” Not exactly the way I would have put it.
I’m a lover of new technology and a strong advocate for it; in fact, I have prioritized innovation in the organizations I’ve led. But as we evolve and move forward with technology, it’s important that we do our best to leave no one behind. That means we must do all we can to reduce bias in AI. Otherwise, we’ll be creating a new problem while solving another, hindering progress for all.