This article isn’t a press release for good generative AI news, nor is it a takedown of publications that aren’t following best practices with generative AI. There’s a trace of both, but they’re necessary for my end goal: to muse, optimistically yet cautiously, on the future of AI in publishing through the lens of current events.
The good news: The Arena Group, which operates 250 brands including publications like Sports Illustrated, Parade, The Street and Men’s Journal, announced that it will provide Jasper to its teams to help them generate more content more efficiently. That content includes not only articles but also videos, newsletters and marketing campaigns. In fact, The Arena Group has already been experimenting with generative AI to build stories like this one for Men’s Journal: “Proven Tips to Help You Run Your Fastest Mile Yet.”
The bad news: The day before this announcement, Red Ventures was exposed for quietly using an internally built generative AI tool — one that company leadership knew was flawed — to write 77 articles for its brand CNET. More than 50 percent of those stories — some of which were about loans, interest rates, savings accounts and other things that affect real people’s money — contained factual inaccuracies and instances of plagiarism.
Once the inaccurate stories came under public scrutiny, CNET’s editor-in-chief, Connie Guglielmo, released details about the publication’s generative AI test. She said that every story received (human) edits before publishing, but it’s clear now that those edits weren’t enough in many cases. For that many errors to make it to publication, the edits were likely done at a quick glance, without the due diligence necessary to catch plagiarism and one of AI’s biggest imperfections right now: presenting incorrect information as authoritative truth. CNET invested energy in its AI but not enough in human oversight of it.
Jasper’s partnership with a massive media organization like The Arena Group is a big step forward for generative AI. With it in the hands of skilled media professionals at publications visited by millions each month, the technology is validated at a key moment during this early adoption stage. But that optimism is also tempered by the unfortunate news coming out of Red Ventures and CNET because that situation simultaneously invalidates generative AI both in and by the media.
Many worry that the more AI gets integrated into the mediascape, the more we risk CNET-like situations happening regularly. That would mean less trust in both generative AI and the media (and too many people distrust both already). We can’t let either outcome happen.
Following the statement from CNET’s editor stating that their AI-generated pieces were edited before release, writer Jon Christian brought up an excellent point in a piece for Futurism: “If these are the sorts of blunders that slip through during that period of peak scrutiny, what should we expect when there aren't so many eyes on the AI's work? And what about when copycats see that CNET is getting away with the practice and start filling the web with their own AI-generated content, with even fewer scruples?”
Generative AI companies and the media businesses that use them have a shared responsibility to seal these cracks quickly before more faulty content leaks out and floods the web.
Humans Aren’t Going Anywhere
Much of this conversation — my point — revolves around this: generative AI cannot work without humans, both now and in the future. What happened at CNET answers the question many are asking: “Will generative AI replace real writers?” Nope, and now you can start to see why. Errors happen when humans, particularly those with solid editorial skills and instincts, are removed from content processes that involve gen AI. And sadly, content suffers and trustworthiness diminishes as a result.
A lot of what separates good and bad uses of generative AI in the media (and overall) comes down to accountability on the part of the humans using it and the publications employing them. This is particularly important right now, since the technology is still imperfect and all eyes are on this industry. No one is mistake-proof, and articles that need corrections get published all the time; I’ve certainly written some. But AI-written stories need an extra level of human oversight, which shouldn’t be too much to ask, since the technology frees up so much time in the overall content creation process.
Thankfully, we are on the road to making this accountability happen. Following the initial news of CNET’s dodgy situation, Guglielmo said her team will be more transparent in detailing when AI influenced a story. They’ll also provide the name of the human editor checking the AI’s work for more accountability. The Arena Group has been practicing similar levels of transparency already with their AI-generated content for Men’s Journal.
This technology is bound to improve, and its inherent errors will diminish with time. However, businesses that choose to fire or circumvent their editorial teams and replace them with gen AI will undoubtedly produce content of diminished reader value compared to businesses that don’t. Ross Levinsohn, chairman and CEO of The Arena Group, offered similar sentiments following the announcement of the company’s partnership with Jasper.
“While AI will never replace journalism, reporting, or crafting and editing a story, rapidly improving AI technologies can create enterprise value for our brands and partners,” he said in a press release.
“It’s not about ‘crank out AI content and do as much as you can,’” he told The Wall Street Journal. “Google will penalize you for that and more isn’t better; better is better.”
I’m a content writer, at a generative AI company no less, who was hired to do exactly what I’m doing now: strategize and write using my brain and Jasper’s. Jasper isn’t being used as a stand-in for an actual writer, and I’m not using Jasper to be a one-man content factory. Jasper allows me to do my job ([trying to] offer a thought-provoking, human perspective on the world of generative AI) more efficiently. But it could never do that job for me, because it’s missing that essential human element. AI content can be good, but it needs some kind of human touch to be “better,” as Levinsohn pointed out. And because gen AI so badly needs skilled people behind it, I’m not worried about the technology uprooting my professional future.
There is a future where people can tell immediately that a mass-media article was written in part or in full by AI, yet find it no less trustworthy. What Red Ventures did with CNET’s gen AI experiment unfortunately set that future back a bit. But the transparency The Arena Group is already practicing in this area, combined with its partnership with Jasper, will bring that future closer.
There’s always an ebb and flow of scrupulous and unscrupulous behavior around any new technology. Right now, we collectively want to do things “by the book,” but the challenge is that the “book” is still being written. We have a shared responsibility to ensure that the figurative “Book of Ethics for Generative AI in the Media and Beyond” we’re writing is rooted in accountability and in the need for human intervention throughout the content creation process. We also have to make sure that as many people and businesses as possible read and follow that book.
If we hold ourselves and one another to higher standards of accountability, then we can work toward creating a world where the media, generative AI and the intersection of the two are absolutely trustworthy.