AI Invented the Books, Chicago Sun-Times Still Printed Them

Published: May 22, 2025.
If you’ve ever wondered what the future of journalism might look like with artificial intelligence in the mix, this week brought a pretty awkward preview.
On May 18, the Chicago Sun-Times published a summer feature titled “Heat Index: Your Guide to the Best of Summer.” At first glance, it looked like a harmless seasonal guide: book recommendations, lifestyle tips, cultural snippets. But readers soon noticed something odd: some of the books on the list didn’t exist. In fact, only five of the 15 recommended titles were real. Some of the experts quoted didn’t exist either. Entire segments had been generated with AI, and nobody caught it before printing.
The response from the Sun-Times was swift. They removed the section from digital editions, clarified that it was third-party content from a national syndicate, and apologized. But even that explanation raised eyebrows. After all, how does something like this slip into a legacy publication known for its journalism?
It turns out the section wasn’t created by the newspaper’s staff but by freelance writer Marco Buscaglia, working through King Features Syndicate, a Hearst-owned content provider. Buscaglia later admitted to using AI to generate background material and, more importantly, to failing to verify it. The AI-generated material included summaries of books that don’t exist and quotes from experts who appear to be entirely fictional.
In his own words, Buscaglia said, “I do use AI for background at times, but always check out the material first. This time, I did not, and I can’t believe I missed it because it’s so obvious. No excuses. On me 100%.”
There’s an easy reaction here: blame AI. But the problem isn’t that AI was used; it’s that nobody checked the work. The issue wasn’t technological; it was editorial. And that distinction matters.
AI tools can be incredibly useful. They can speed up workflows, suggest ideas, and even summarize data. But they have no grasp of truth. They don’t know what’s real and what’s not. That responsibility is still human, and when it’s missing, the consequences aren’t just factual errors. They’re breaches of trust.
This wasn’t just a bad day for the Sun-Times; it was a reputational wound. Readers expect that when something shows up in a trusted outlet, it’s been vetted. Whether it came from a syndicate or not, it was printed in their paper.
This moment taps into a broader anxiety about where journalism is headed. With shrinking newsrooms, tighter budgets, and increasing pressure to publish more with less, AI can look like a solution. It’s fast, cheap, and scalable. However, it can also become a liability without strong standards and review.
It’s tempting to see this as a one-off mistake, but it’s not. Other outlets—like Gannett, Sports Illustrated, and even the Philadelphia Inquirer—have recently faced similar issues with AI-generated content. In each case, it wasn’t just about the tech. It was about people cutting corners.
And here’s the uncomfortable truth: mistakes like these don’t just undermine credibility for the outlets involved. They chip away at trust in journalism as a whole.
The Sun-Times says it is reviewing its editorial policies and re-evaluating third-party partnerships. That’s a start. But this incident raises bigger questions: How should newsrooms use AI, if at all? What standards should exist for labeling or disclosing its use? And how can readers know when a machine made something?
There’s no perfect answer yet. But what’s clear is this: a journalist’s job, like any writer’s, isn’t just to deliver words. It’s to care whether those words are true. AI can assist with a lot, but it can’t care. That still has to come from people.
And if journalism loses that, then no algorithm in the world can save it.