By Nir Eisikovits, Professor of Philosophy and Director, Applied Ethics Center, UMass Boston
The journalism industry has been under immense economic pressure over the past two decades, so it makes sense that journalists have started experimenting with generative AI to boost their productivity.
An Associated Press survey published in April 2024 asked journalists about the use of generative artificial intelligence in their work. Nearly 70% of those who responded said they had used these tools to generate text, whether it was composing article drafts, crafting headlines or writing social media posts.
A May 2024 global survey conducted by the public relations firm Cision found the slice to be somewhat smaller – 47% of journalists said they’d used generative AI tools like ChatGPT or Bard in their work.
But does the adoption of the technology pose any moral questions? After all, this is a business where professional ethics and public trust are especially important – so much so that entire fields of study are devoted to them.
Over the past few years, my colleagues and I at UMass Boston’s Applied Ethics Center have been researching the ethics of AI.
I think that if journalists are not careful about its deployment, the use of generative AI could undermine the integrity of their work.
How much time is really saved?
Start with an obvious concern: AI tools are still unreliable. Using them to research background for a story will often result in confident-sounding nonsense. During a 2023 demo, Google's chatbot, Bard, famously spat out a wrong answer to a question about new discoveries from the James Webb Space Telescope.
It is easy to imagine a journalist using the technology for background, only to end up with false information. Therefore, journalists who use these tools for research will need to fact-check the outputs. The time spent doing that may offset any purported gains in productivity.
But to me, the more interesting questions have to do with using the technology to generate content. A reporter may have a good sense of what they want to compose, so they will ask an AI model to produce a first draft.
This may be efficient, but it also turns reporters from writers into editors, thus fundamentally altering the nature of what they do.
Plus, there is something to be said for struggling to write a first draft from scratch and figuring out, along the way, whether the original idea that inspired it has merit. That’s what I am doing right now as I write this piece. And I’m sad to report that I discarded quite a few of the original arguments I wanted to make, because as I tried to articulate them, I realized that they didn’t work.
In journalism, as in art, generative AI emphasizes – indeed fetishizes – the moment in which an idea is conceived. It focuses on the original creative thought and relegates the tedious process of turning that thought into a finished product – whether it’s through outlining, writing or drawing – to a machine.
But the process of writing out a story is inseparable from the ideas that give rise to it. Ideas change and take shape as they are written out. They are not preexisting entities patiently floating around, perfectly formed, simply waiting to be translated into words and sentences.
AI’s undermining of a special relationship
To be fair, only a portion of the journalists in both surveys were using generative AI to compose drafts of articles. Instead, they were using these tools to accomplish other tasks, such as writing newsletters, translating text, coming up with headlines, or crafting social media posts.
Once journalists see that the AI is quite talented at writing – and it is getting better and better at it – how many of them will resist the temptation?
The fundamental question here is whether journalism involves anything more than simply conveying information to the public. Does journalism also entail a kind of relationship between writers and their readers? I think it does.
When a reader regularly follows the analysis of someone writing about the Middle East or about Silicon Valley, it is because they trust that writer, because they like that writer’s voice, because they have come to appreciate that writer’s thought process.
If journalism involves that kind of relationship, does the use of AI undermine it? Would I want to read journalism created by what amounts to an anonymized aggregation of the internet any more than I would want to read a novel created by an AI or listen to music composed by one?
Or, stated differently: if I read a piece of journalism or a novel, or listened to a piece of music, that I believed was created by a human being, only to find out that it was largely generated by an AI, wouldn't my appreciation of the work and my trust in it change?
If the practice of journalism is based on having this kind of relationship with the public, the increased use of AI may well undermine the integrity of the practice, particularly at a time when the industry is already dealing with trust issues.
Being a journalist is a noble calling that, at its best, helps sustain democratic institutions. I assume that this nobility still matters to journalists. But most readers probably would not trust AI to uphold the social role that journalism plays.
AI does not care that “democracy dies in darkness”; it does not care about speaking truth to power. Yes, those are cliches. But they are also widely held precepts that sustain the trade. Journalists neglect them at their peril.
Originally published on The Conversation under a Creative Commons license as "Newsrooms are experimenting with generative AI, warts and all."