Internal AI Transparency Can and Should Be Better

Businesses must increase transparency in AI usage to foster trust and adaptability among employees as AI integration expands.

Published on Nov 08, 2023

HR and workforce solution provider UKG recently surveyed 4,200 professionals — from executives to individual contributors — across 10 countries about how they view and use AI personally and professionally. The results are quite interesting.

78% of C-suite leaders said AI is in use within their organization today. And 71% of those leaders said AI can offer their business and/or their teams a competitive advantage, which makes investing in more advanced uses of the technology a medium-to-high priority for their organizations.

Regarding employees powering these businesses, the survey offered another, far more concerning statistic: “54% of people say they have ‘no idea’ how their company is using AI, and that lack of transparency is a real problem,” said Dan Schawbel, managing partner at Workplace Intelligence, which partnered with UKG for the study.

Yes, that is definitely a real problem. 

It's especially damning when you consider another insight from the study: roughly 77% of employees, on average, said they would be more excited about AI if they knew how their company was using it. The same was true if leadership offered guidance on how the technology could improve their workflows. Not only that, but the employees who are willing to embrace AI at work said doing so would increase their job satisfaction and their willingness to go above and beyond in their roles. It would also create more trust in their leadership.

So my question (which I'm basically screaming at this point) is: where is the transparency? Why is it so hard for leaders and the companies they run to be more transparent with their staff about AI use? Why is it that transparency is so difficult when there's such a clear interest on both sides in adopting the technology? 

“Organizations must be more upfront about how they’re using AI in the workplace, if they want a competitive advantage and want to earn, and keep, the trust of their employees,” said Schawbel.

When AI Transparency in Business Matters

The generative AI tools we’re familiar with today can be used across businesses in really nuanced ways. Some use cases are more consequential than others, and that can dictate whether AI transparency is necessary. So before I dive into my argument, I think it’s important to clarify when I feel disclosures about AI use in a company make the most sense. 

Is AI transparency necessary when…

…A lone employee or a small handful of employees uses AI to help in their day-to-day? Maybe. It depends: is that AI-assisted work public-facing, or is the technology merely being used as a personal assistant? When AI helps create public-facing content, or work that carries any level of consequence for the public or the organization as a whole, others within the business should know it’s being employed. That would allow leadership to issue guidelines around AI’s safe and reliable use across the many places it might show up. But if AI’s influence over the work has little to no consequence, transparency probably isn’t necessary. 

…AI is rolled out to an entire team or department as part of their critical infrastructure — a tool that could impact the team’s (and, by extension, the company’s) overall success? Yes. 

…AI is used by a few key stakeholders for pivotal, mission-critical operations within the business, or it’s baked into those operations via automation and the like? Definitely. 

In short, if AI’s place in a company — either by one person or an entire team — carries any real weight and could impact the company’s image, profitability, customers, or anything else of real consequence, its use should be disclosed. 

More Transparency Means More Trust

As a leader, being sneaky about how, when, why, and where AI is used at your company can be harmful and rightfully raise eyebrows. 

Take CNET, for example. Starting in November 2022 — the same month ChatGPT was released — the 30-year-old company quietly published about 70 AI-generated articles on its well-known site. Staff said they were blindsided, and many were upset. When it came out that not only were AI-generated stories being published without staff’s knowledge, but over half of those stories contained errors or plagiarism, it caused a significant uproar in the tech world. Connie Guglielmo, CNET’s Editor in Chief and Senior VP at the time, eventually wrote an article apologizing for the poor AI rollout and promising to develop better AI systems, and the company then created an AI policy. Soon after, a union representing CNET employees planned to negotiate over a “lack of transparency and accountability from management” around key issues like AI, among others. 

Could all of this have been prevented if CNET leadership had been more transparent about AI use from the beginning? Maybe. Maybe not. But at the very least, more transparency could have prevented some of the resulting internal (and external) distrust. 

Overall, a transparent and well-thought-out stance on AI’s place in a company can do a lot of good. It can:

  • Create a greater sense of trust and empowerment among team members by letting them voice their thoughts and concerns about AI best practices. 
  • Give leadership opportunities to teach employees how to evolve their workflows and skills with AI.
  • Ensure that AI policies around topics like content creation, data security, biases, and more align with company values. 
  • Create more external transparency since AI skepticism is still very much alive and more people than ever are asking, “Did AI make this?” An external AI statement is doubly important if your business specializes in creating public-facing content.

The specifics of a single business’s AI stance — guidelines, ruleset, whatever you might call it — can differ a lot depending on exactly how AI could (or shouldn’t) be employed there. All of that can also vary on a team-by-team basis within the organization. This is why getting input from employees, or from an AI ethics committee, is vital as AI use grows more widespread. 

Here’s Jasper’s AI policy as an example of what one of these policies looks like: 

“We use AI to assist in some content development at our company. To ensure transparency, accountability, quality and privacy, we adhere to internal AI usage standards. These standards help us safeguard against biases, maintain data security, and uphold our commitment to ethical marketing practices. One of these standards is that AI should be used to assist in content creation, not fully automate it. We ensure that every piece of content we develop is shaped and reviewed by people who have an understanding of our audience and AI’s limitations.”

If you’re unsure where to get started on developing AI guidelines that you can share with your team, Jasper developed a template to build off of. 

Download Jasper's AI Policy Template for Businesses

It’s bizarre that so many executives in UKG’s survey said AI is used in their business and about as many employees would be excited about using it if they knew they could. Yet around 50% of those professionals aren’t aware that they could actually be using it, today. Or they could at least offer their perspective on how it’s being used within their company. 

Leaders need to do a better job of closing that gap. There should be more cross-team alignment among department heads on who is using AI, how, and why, since the work of one team — or a single person — could impact everyone. Otherwise, leaders risk alienating their staff and their audience, and/or needing to release apology statements. 

Jasper's AI policy template

Meet The Author:

Alton Zenon III

Jasper Content Marketing Manager

