ChatGPT Threatens Authenticity Of DEI Communications From Leaders


ChatGPT could make leaders sound like they genuinely care about diversity, equity, and inclusion, even when they don’t. It will be especially troubling when the messages being communicated concern DEI topics about which leaders know very little and in which they aren’t sufficiently invested. This poses one of the biggest threats to the integrity of DEI work in organizations. It is also one of the most significant risks to leaders’ relationships with the diverse employees whom AI-generated words deceptively affirm.

DEI communications from many C-suites are already plagued by authenticity problems. Statements from the CEO about tragic police shootings of unarmed Black Americans, as well as the seemingly obligatory history and heritage month emails celebrating LGBTQIA+ Pride and various ethnic groups each year, are typically written by someone else. Employees can sometimes detect the artificiality, and that ghostwriting undermines executive integrity on DEI. ChatGPT will exacerbate the problem.

Women employees would surely be disappointed to learn that ChatGPT wrote the email they received from their CEO stressing the importance of ending sexism and advancing more of them to leadership roles within the company. Many would feel lied to. This would weaken the CEO’s credibility with women and other colleagues who are seriously committed to women’s workplace success. Future executive-level communications about gender equity and other DEI topics would likely be read through the prisms of skepticism and doubt — or altogether ignored.

ChatGPT may seem useful to leaders who don’t quite know what to say to their employees about specific racial, social justice, and DEI issues. I instructed the publicly available AI tool to write a statement from a CEO to employees about the recent fatal beating of Tyre Nichols by police officers in Memphis. Here’s what it produced:

Dear Employees,
It is with deep sorrow and regret that I must inform you of the senseless beating of Tyre Nichols on Tuesday night. This is an outrage and a tragedy, and we all send our thoughts and prayers to Tyre and his family.
This incident is a heartbreaking reminder of the racism, discrimination, and violence that still exists in our world today. We must all stand together in solidarity and not allow this hatred to take over our society. We must take action and fight against injustice, violence, and bigotry in any way we can.
At this time, our main focus is ensuring that Tyre and his family have all the support they need to get through this difficult time. We are doing everything we can to assist in their healing process.
We cannot and will not stand for any sort of hatred or violence in our community. We are all in this together and must work together to create a better and safer world for everyone.
Sincerely,
[Your Name]
CEO

Even though it doesn’t include all the components I recommended to executives in a Forbes article published just moments before the release of video footage showing Memphis police officers attacking Tyre Nichols, I must admit that the ChatGPT-crafted CEO message sounds quite authentic. That’s the problem. But it isn’t an entirely new problem.

Executives communicating words they didn’t write themselves is nothing new. American presidents rely heavily on speechwriters. Many listeners are aware that Mr. President didn’t write the words he’s reading off the teleprompter (the gendering here is intentional, because all 46 U.S. presidents have been men). For better or worse, it was usually clear when President Donald Trump wasn’t reading words that had been written on his behalf.

In companies and universities, communications officers often draft or fully construct statements, emails, and speeches for presidents who understandably don’t have the capacity to write everything themselves. That explains how and why those messages usually sound so empty, sanitized, and corporatized. It also partially explains why executive leaders’ DEI-related actions so often fail to match the words expressed in messages sent under their signatures.

As I explained in my Washington Post article following the murder of George Floyd in summer 2020, many Black professionals doubted the seriousness of messages they and co-workers were receiving from their executive leaders addressing the tragedy and declaring the value of Black lives. In many companies, Black employees could tell that the comms team, not the CEO, wrote those words. It disappointed some and infuriated others.

Here’s a longstanding truth of which many executives are probably unaware: many Latino professionals in an audience can tell when a leader’s Hispanic Heritage Month remarks were written by someone else. The words sound hollow, and it’s painfully apparent that those leaders haven’t spent much time immersing themselves in Latino culture or talking with the company’s Latinx employees. A ChatGPT speech might well sound better, especially if the tool is instructed to make the second and third drafts more personal and compassionate. But is that the right thing to do? No.

Because ChatGPT is so new, I and other DEI researchers haven’t yet had an opportunity to study how women, employees of color, and colleagues who are queer, Muslim, Jewish, or otherwise diverse would feel about receiving a caring-sounding email about their communities from a leader who used ChatGPT to write it. I comfortably predict that the overwhelming majority would deem it improper, inexcusably dishonest, and, in some instances, typical.

Some K-12 school districts, including the New York City Department of Education, the nation’s largest public school system, have already banned students’ use of ChatGPT because it’s considered plagiarism (as are other ways of misrepresenting something someone else wrote as one’s own work). Higher education leaders and faculty members are also grappling with the tool’s ethical implications. Forbes Contributor Chris Westfall recently wrote about a survey in which nearly three-fourths of professors said they were concerned about collegians using the AI tool to cheat.

ChatGPT shouldn’t be banned in businesses, but corporate leaders must resist using it to voice their perspectives on and commitments to DEI. They have to be mindful of what the first letter in AI stands for — employees neither want nor deserve artificiality in the DEI communications they receive.
