Your current research focuses on generative AI’s impact on news. What do you see as the biggest challenges this technology poses for the accuracy of information and news, and what are the major policy opportunities in Australia and globally?

Although there is a great deal of experimentation going on in newsrooms, there is also an acute awareness of the risks of generative AI’s use in news, such as inaccuracy, bias, misrepresentation, and issues of copyright and defamation. Nearly all newsrooms have at least informal policies clarifying the need for strict editorial oversight and transparency where AI is used. Most of these policies rule out the use of AI in news output without human oversight, and some prohibit it altogether. Of course, mistakes will happen, but unless editorial oversight is significantly weakened, this shouldn’t present a policy or regulatory issue.

A more troublesome problem for newsrooms is the proliferation of AI-generated content outside the newsroom, for example in stories taking off on social media. Such content can strain traditional verification processes, which have occasionally been found wanting even without AI in the mix, as with the tragedy of the Bondi stabbings in April. There has also been an explosion in what’s been called ‘AI slop’: low-quality content, sometimes masquerading as news, or even news copied from other sources and rewritten by AI. This is generally just a form of clickbait designed to sell advertising, but it can also be more insidious, such as when it is used to manipulate political discourse. These are not necessarily separate policy problems from misinformation more generally, but it’s safe to say that AI is contributing, to an extent, to the deterioration of the online information environment.

Perhaps the largest immediate concern for the news industry is the use of professional news to train AI systems. The release of generative AI saw a flood of copyright claims as AI systems were trained on publicly available news content. Since then, large AI developers have increasingly been signing deals with major news companies for the use of their archives. This provides a source of quality information for developers and some recompense for news companies. Whether the balance is right remains to be seen, and we may need to await clarity on whether AI training breaches copyright. That is an issue the Australian Government is currently considering on multiple fronts, including copyright and news policy.

With your background as both a senior policy officer and a researcher, what key takeaways have you learned in your career about effectively influencing policy?

At the Australian Communications and Media Authority, I could see a direct impact on a number of policy areas. For example, our work in ACMA’s disinformation taskforce was very important in informing the government’s policy approach and the regulatory proposals currently before parliament. That’s partly down to circumstances, as it’s such a new area of policy and regulation, but our recommendations to government were based on extensive research, careful policy analysis and broad consultation, as well as close work with key stakeholders. These are all critical to developing good policy.

From the researcher side, it can sometimes be difficult to say whether the work we do effectively influences policy. Many researchers are actively engaged in the policy arena, making numerous submissions each year to government consultations and parliamentary inquiries. Sometimes our research is taken into account; in other cases, it’s not. This, of course, depends on a great many factors outside our control. It’s important that we continue to engage so that we’re ready for those windows of opportunity when we can influence policy.

Which practices or approaches have you found most helpful in ensuring your work has the greatest impact on policy processes?

At the UTS Centre for Media Transition, we tend to adopt a pragmatic approach, aiming our recommendations at areas where we believe there is some flexibility in the policy approach, or where attention is most needed. Once the policy machinery has rolled into gear, particularly where it involves legislative reform, it can be difficult to change direction entirely.

We also frequently host and participate in multi-stakeholder discussions across government, industry and academia, both in Australia and across the region. While this is important for our research, being actively engaged may also influence how receptive policymakers are to our work.

How can researchers and policymakers collaborate to make change happen in this space?

With so much policy development going on in the communications space – particularly on digital platforms – collaboration between researchers and policymakers is increasingly important.

Government-funded external research on particular policy areas is an important point of collaboration between government and academia. In such a fast-changing space, researchers can fill gaps in government and regulator knowledge, providing empirical evidence, rigorous analysis and comparative studies. Importantly, our work is independent of the views of government, industry and other stakeholders.

Media and communications are a politically fraught area, with vocal concern over protecting free speech while addressing pressing online harms. There’s a danger of reactive policymaking, which can lead to piecemeal, ineffective or unworkable approaches. It is critical that government provides adequate opportunity for consultation in parliamentary inquiries and other formal processes. Academic input is most effective when there is sufficient time to develop fully considered responses to policy and legislative proposals.

How does your research help policymakers understand how people interact with information online?

In my view, it is critical to take a broad environmental or ecosystem approach to problems of information integrity. Treating misinformation as purely an online content problem is unlikely to be effective.

We need to understand the broader sociopolitical context, we need to understand the economics of the platform economy, and we need to understand what US researcher Renée DiResta calls the ‘infrastructure of influence’. Digital platforms play an important role in the proliferation of misinformation, and we should make them accountable for that role through well-designed, systems-focused regulation.

But we also need to look at other factors at play. These include the role of legacy media, political influencers and government. Increased accountability in these areas is also needed, given the slow but inexorable slide in public trust. We should also think about the importance of supporting quality, independent news, as well as academic research. These are critical to improving the quality of the public sphere.

