OK Computer? Public attitudes to the uses of generative AI in news

What we were asked to do

While much of the news industry has set its sights on generative artificial intelligence and how it can support the production and distribution of news, little attention has been paid to how audiences feel about AI-powered innovation in journalism.

Our recent study, conducted with the Reuters Institute for the Study of Journalism, aims to redress that balance by taking a select group of news consumers on a journey: helping them understand for themselves how AI might be used in journalism, and to develop informed opinions about the potential risks and benefits.

We used a deliberative methodology that allowed participants to engage in depth with the detail of how AI could be used in journalism, exploring how news consumers form their attitudes, how they feel about different uses of AI in news, and why. Three questions guided the research:

  • What kinds of applications are news audiences comfortable with, and which, conversely, spark unease?

  • How should news organisations think about the disclosure of AI use in newsrooms? 

  • And how might the use of AI impact—and be shaped by—trust in news?

Beyond initial reactions prompted by generalised suspicion and concerns about the complete automation of journalistic content, the findings paint a nuanced picture, highlighting which applications are more likely to sit well with audiences and which are less so.

The full report details when, where and how publishers can safely integrate AI, and conversely where caution is warranted. Crucially, it also explains why.

Five key findings from the research

  • At a time when most people have little to no technical knowledge of, or direct experience with, generative AI, perceptions of these technologies draw heavily on popular culture, media narratives and everyday conversations. Mediated discourse about AI is largely negative, and it colours how people think about AI in news specifically, which they typically approach with suspicion.

  • We identified three broad types of people, defined by a range of factors: 

    • Traditionalists tended to be most fearful of technological change and had lower levels of knowledge of, and experience with, AI.

    • Sceptics were cautious and critical in their outlook towards technology and were more clued up on generative AI and LLMs.

    • Optimists were generally trusting of technological progress and most focused on personal benefits.

  • Initial negative reactions to the use of AI by journalists typically defaulted to the assumption that AI would be used for content creation. As participants interacted with more use cases, their attitudes became more nuanced and, on balance, more positive. In addition to cultural and personal factors, comfort levels depended on four key elements:

    • Where in the process of news creation and distribution generative AI is used 

    • The type of information being conveyed, and whether human interpretation and feeling were required or desired

    • The medium itself – text, illustrations, photos and videos were viewed differently

    • How much human oversight there would be.

  • Participants viewed oversight as a principle of good journalistic practice more generally, and especially so when it came to AI. However, the expected level and nature of oversight varied according to where in the process generative AI was used.

    Just as comfort levels vary across different parts of the news creation and distribution process, participants felt disclosure (e.g. labelling) was less important for some behind-the-scenes tasks than it was when AI was used to deliver content in new ways, and especially to create new content. People saw disclosure as much less important when AI is used to assist journalists than when it is used to augment or automate content creation.

  • Attitudes towards AI, in general and in relation to news, will almost certainly continue to evolve as people become more accustomed to, informed about, and experienced with AI. We are at a critical juncture that presents news organisations with both opportunities and risks.

    Participants were, for the most part, still making up their minds about generative AI, both in general and in news. Uses of generative AI are still developing, and people are becoming more aware of them. Audiences told us that their trust in information could go one of two ways, depending on how things develop.

    In one scenario, trust in all information decreases and the goal of much disinformation is realised: people doubt everything, trust nothing, and no provider of information is automatically afforded more trust. They disengage from news, politics and the democratic process. Some of our younger and more sceptical participants were already there (and not because of generative AI, or at least not only because of it).

    In another scenario, where information in general is less trustworthy, trusted providers could be valued even more. Here, trust in newsbrands goes up or stays the same. But that trust has to be earned, re-earned and maintained. 

    With the incorporation of generative AI into the production and distribution of news, we are at an inflection point where trust can be earned or lost. How newsbrands respond now will go a long way towards determining which scenario prevails.