The Call Is Coming from Inside the House
Even when companies forbid AI, writers and artists face strong incentives to use it anyway
The Chicago Sun-Times wound up with a highly realistic digital simulation of egg on its face last month, when readers noticed that a “summer reading list” they’d published in a special summer supplement—as apparently had other major papers—was full of completely imaginary books by famous authors. The whole feature, it turned out, had been written by a large language model—and as LLMs are wont to do, it had hallucinated such titles as Isabel Allende’s Tidewater Dreams and Pulitzer-winner Percival Everett’s The Rainmakers.
You might reasonably imagine this is yet another case of greedy corporate overlords eager to replace human writers with AI slop to save money. But in this case—and in a lot of similar cases where commercial art or writing is exposed as AI-generated—you’d be wrong. Partly because the Chicago Sun-Times is run by a nonprofit, but mostly because the feature was assigned to (if not actually penned by) a real live human being: veteran journalist Marco Buscaglia. Buscaglia had decided to cut corners in order to hit his deadlines and quickly take on the next paying gig, which he has since admitted is something he does routinely—though he insists that normally he at least checks over the LLM’s output to make sure it’s accurate. The Sun-Times had bought the whole supplement package from King Features Syndicate, which has a policy against undisclosed use of AI by writers. The Sun-Times assumed (perhaps reasonably, but apparently wrongly) that the syndicate had already done their editorial due diligence.
This fits something of a pattern we’ve seen in recent years: A company whose formal policies forbid AI-generated art or writing is caught publishing it anyway, and the offending work turns out to have been the result of a freelancer or contractor looking to lighten their workload. DC Comics has on several occasions had to cancel or pull work by artists alleged to have used AI to produce published comics covers. Wizards of the Coast, the company behind Magic: The Gathering and Dungeons & Dragons, swore off AI after discovering one of their artists had used AI tools on an illustration published in one of their D&D sourcebooks… then had to shamefacedly own up to running AI-generated marketing materials produced by an outside vendor.
It’s not all that hard to imagine the slippery slope that leads a freelancer like Buscaglia to self-destructively outsource his own creative work to chatbots. Perhaps you start just using one for purposes virtually nobody would object to: transcribing interviews, brainstorming story ideas, identifying relevant articles and books for background research. Then, impressed with how useful it seems, and confronting a tight deadline, you let it “clean up” a rough draft you’ve gotten bored of after hammering away at it for hours. Hey, not that different from running a spelling and grammar check, right? Your editors are none the wiser, and indeed delighted with how much faster you’re turning around copy these days.
Well then, why not let it do a little more of the work? Feed it an outline and your notes, maybe some interview fragments, and let the LLM worry about the slog of turning those raw materials into a polished piece of prose. The ideas and information still came from the human author, right? The chatbot has sped up your workflow so much that you’re able to take on significantly more paying assignments—a big help at a time when journalism’s economics keep tightening and freelancers have an ever tougher time making a viable full-time career of it. Gradually it starts to feel precious and old-fashioned to insist on spending hours doing by hand what the machine can do in seconds—and seemingly just as well, at least as far as the editors are concerned. And anyway, at this point you’ve taken on so many assignments that the LLM’s “assistance” is no longer really optional.
None of this is by way of excusing this sort of scam, but you can see how a lazy writer would incrementally rationalize it. And how the companies buying the work might have similar incentives to turn a blind eye, formal prohibitions on AI notwithstanding.
In Buscaglia’s case, after all, if an editor with even a passing interest in contemporary fiction—never mind an honest-to-God literary editor—had taken a cursory glance at that “reading list,” they’d have instantly smelled a rat. “Huge celebrity authors like Isabel Allende, Percival Everett, and Andy Weir all have new books that I’ve never heard of? Weird, I’d better look that up.” It’s clear not much more human brainpower went into editing the piece than writing it. And maybe that’s partly by design.
After all, while it makes sense for the formal policy to prohibit freelancers from using AI so long as many readers are powerfully hostile to paying for machine-generated output—and plenty of editors personally dislike the idea as well—there’s not a ton of incentive to rigorously enforce that rule provided the nominal human author is doing the bare minimum of work to avoid hallucinatory embarrassments like this one. That’s particularly likely to be the case when it comes to breaking news reporting, where a reporter who’s dictating notes on their scoop into an AI agent is always going to have a finished article ready a half hour before the competitor relying entirely on their meat brain to get it done. That’s such a significant advantage in terms of traffic—especially if you can do it consistently—that whatever their personal feelings, editors will have overwhelming incentives not to get too nosy about exactly how their stringers are turning copy around so quickly, especially given there’s no really reliable way to police it in the case of writing (and, as the algorithms improve, AI-generated art will likely get increasingly hard to detect as well).
The short of it is, I suspect this is a pattern we’re going to see recur a lot—and one we’re probably already seeing a lot without recognizing it: Institutions adopt formal policies barring undisclosed AI use by writers and artists; the individual artists and writers face enormous competitive incentives to skirt those rules (albeit hiding it better than Buscaglia did); and the institutions have little motive and, in fairness, limited practical ability to vigorously enforce their own official policy without resorting to a degree of micro-managerial surveillance that would be even more offensive than the problem it aims to solve.
I think the appropriate collective response here is twofold. First, we probably need to acknowledge that for certain use cases, like the immediate summary of a reporter’s notes on a breaking story, the competitive advantage from using generative AI is probably going to be too great to make a blanket prohibition stable, and accept a certain amount of that being done openly. There’s nothing more corrosive to professional norms in general than a scenario where everyone knows the putative rule is being routinely flouted. Second, for the remaining cases we need a broad consensus that undisclosed resort to AI is the kind of professional breach that, like inventing sources or breaching confidentiality, gets you permanently blackballed.