A US newspaper published a summer reading list featuring non-existent books because it relied on artificial intelligence to generate the copy. The freelance writer responsible for the content, Marco Buscaglia, admitted to the Chicago Sun-Times that he “failed” to vet the output of the chatbot he used.
While all the authors on the reading list do exist, many of the titles and book descriptions were fabricated. Min Jin Lee does not have a novel called “Nightshade Market” set in Seoul’s underground economy; Rebecca Makkai has not written “Boiling Point,” a book about a climate scientist; and Andy Weir did not write “The Last Algorithm,” a thriller about an AI that develops consciousness and secretly influences global events.
The reading list was included in a 64-page section of the Chicago Sun-Times called “Heat Index: Your Guide to the Best of Summer,” the entirety of which was produced by King Features, a subsidiary of Hearst. The special section was also syndicated to at least one other major regional newspaper, The Philadelphia Inquirer.
Buscaglia’s byline appears multiple times in the supplement, and he has acknowledged to the Sun-Times that he used AI for his other stories as well. He is currently reviewing those pieces for potential errors, as is Chicago Public Media, the newspaper’s owner. The Atlantic has since identified numerous fabricated quotes, experts, and citations elsewhere in the Heat Index.
King Features said it will terminate its relationship with Buscaglia, stating that he failed to disclose his use of AI — a violation of company policy. The Sun-Times has replaced the section in its e-edition with a letter from Chicago Public Media CEO Melissa Bell and announced that print subscribers will not be charged for the May 18 edition. Going forward, the paper will also explicitly label third-party editorial content and enforce compliance with its internal editorial standards.
“It is unacceptable that this content was inaccurate, and it is equally unacceptable that we did not make it clear to readers that the section was produced outside the Sun-Times newsroom,” Bell told the Sun-Times.
Hallucinations are a risk for the growing number of news outlets using AI
This is not the first time a news outlet has blamed a third party for inaccurate AI-generated content it published. Sports Illustrated and Gannett, which owns USA Today and other publications, have both previously attributed fake authors and reviews to a company called AdVon Commerce, according to The Verge.
Hallucinations, the term for AI-generated content that is false or invented, remain a fundamental challenge. OpenAI’s o4-mini model has shown hallucination rates as high as 79% on some benchmarks, and studies suggest that hallucination frequency is increasing in newer models.
As financial pressures grow, more newsrooms are approving AI tools to accelerate production. In February, The New York Times publicly confirmed its use of AI tools, joining the Associated Press, The Guardian, and News Corp. According to a 2023 report by a London School of Economics think tank, more than 75% of journalists, editors, and other media professionals use AI in the newsroom.
Hallucinations are not the only problem; AI-generated journalism often lacks depth, nuance, and relevance. When an Italian newspaper published an insert written entirely by AI, its article on Donald Trump opened with a vague statement about the president’s notoriety, read like scraped or aggregated content, and lacked both an Italian viewpoint and a clear editorial voice.
AI can also introduce bias, even when the model itself is not designed to take a position. In March, an AI summarisation tool used by the Los Angeles Times characterised a series of articles as sympathetic to the Ku Klux Klan, when the coverage in fact sought to highlight how historical narratives had downplayed the group’s ideological threat.
In January, Apple faced criticism after its AI-powered news summarisation tool misrepresented reporting from reliable news sources, including BBC News and The New York Times, and pushed inaccurate alerts to iPhone users.