
[INTERVIEW] What impact will AI have on media? Interview with Craig Forman

Interview by Eléonore de Vulpillières

Will AI kill the press or allow it to improve? In this interview, Craig Forman discusses the impact of AI on the media, in terms of the reliability of information and public confidence.

What are the main changes that artificial intelligence is bringing to the way today’s media work? How does it affect the traditional way in which they operate? 

Early forms of artificial intelligence (prior to the development of generative AI) have been used for years to both create and distribute online news and information. Larger newsrooms have leveraged automation for years to streamline production and routine tasks, from generating earnings reports and sports recaps to producing tags and transcriptions. These practices have now made their way to many smaller newsrooms. AI has also been used by technology companies to automate certain tasks related to news and information, such as recommending and moderating content as well as generating search results and summaries. 

Recent developments in far more sophisticated “generative AI” systems have raised concerns about their potential for destabilizing white-collar jobs and media work, abusing copyright (both against and by newsrooms), giving the public inaccurate information and eroding trust. At the same time, these technologies offer new opportunities. They create new pathways for sustainability and innovation in news production, including generating newsletters, distilling and combing through archived materials, pitching stories, translating content for wider distribution and more. 

These developments also introduce new legal and ethical challenges for journalists, creators, policymakers and social media platforms. This includes how publishers use AI in news production and distribution, how AI systems draw from news content and how AI policy around the world will shape both. 

New uses of AI in journalism will require new kinds of disclosures and transparency with the public. It is important that these disclosures be developed in a way that is meaningful to the public and helps them evaluate each individual piece of journalism.

Some analysts argue that artificial intelligence could reduce the pluralism and diversity (or at least the impact) of the information provided by the media by encouraging extreme personalisation of content, via filter bubbles. What is your view on this? 

Challenges around extreme personalisation and filter bubbles are serious, and will impact the ways our societies function, share ideas and find possible solutions to current challenges. Such walled-off consumption can also result in less interest in — and thereby revenue for — a wide range of information. However, these concerns predate the use of AI in the media. The news media already face these challenges under the industry’s current business model, amid a dwindling number of local, independent news outlets. And selective media exposure along partisan or generational lines has been on the rise for years alongside the expansion of choices and increased political partisanship. One question we should focus on together is whether there are ways that AI could actually help diversify exposure — and increase the importance the public places on quality news content.

The algorithms used by social media platforms can often favour the dissemination of sensationalist or polarizing content designed to generate more engagement. Could this trend have an impact on the media landscape as a whole? 

Again, this is not a phenomenon unique to AI. Algorithms and the challenge of social media platforms promoting content that generates more engagement have shaped the media landscape for years. Yes, this trend could have an impact on the media landscape as a whole. It is critical that policymakers, technology companies and researchers collaborate to ensure algorithmic selection incentivizes high-quality information as much as possible without crossing the line into content moderation that is problematic for freedom of expression, press independence and the public’s access to a plurality of fact-based news. Some level of increased transparency in the algorithmic selection process would also be helpful, again provided that the benefit outweighs the potential for gaming of the system by those with ill intent. 

Digital platforms are the central location for many people around the world to find and share news and information. The extensive reach of digital platforms has in many ways greatly expanded the reach of quality news, the ability to focus on minority communities and more, but algorithmic uses also carry the potential for results to favor one outlet or type of content over another — as well as to reinforce an individual’s predilections. Greater transparency of algorithmic processes is important to help mitigate some of these risks, though requirements and/or standards should be developed in a way that still safeguards privacy and guards against misuse to sow harm.

Does the increasing integration of artificial intelligence into the media influence public confidence in the information consumed? Are there risks of manipulation or bias that could emerge from this convergence? 

Most research to date (of which we need more!) finds general distrust among the public of AI-created content. What is less clear, though, is how much of that negative sentiment is due to press coverage and other discussions to which the public has been exposed. The public’s lack of awareness about the array of uses of AI in news — and the ways it has been used for years — including those that are beneficial to getting high-quality news to the public in timely ways, has negatively influenced the public’s confidence in news content.

Technology companies and civil society have begun to incorporate labels or watermarks into content to try to provide transparency to the public. As CNTI wrote in its most recent report on AI in news, watermarks can be helpful, but they are not a silver bullet for managing the harms and enabling the benefits of AI in journalism, and they can in fact backfire when it comes to trust in journalistic news coverage. Preliminary research by colleagues Felix Simon and Benjamin Toff suggests that watermarks and labels denoting AI use in journalism can actually lead to less trust from the public if the sources used to create the content are not provided. It is critical that the public is provided with a more nuanced style of labeling so that they can accurately evaluate which content to trust and which not to. Further research and experimentation is critical to determining which kinds of labels will have the greatest value. 

Faced with the emergence of AI in the media, what measures do you think media professionals and regulators should take to guarantee the integrity, diversity and reliability of the information they broadcast? What are the ethical and legal challenges posed by the use of AI? How could artificial intelligence be used to improve the quality of media content and promote more balanced and accurate information? 

Media professionals, technology companies, researchers and regulators must work together to ensure that the management of AI — whether via legal policies or other kinds of standards — works towards the integrity, diversity and reliability of information. CNTI recently hosted a global convening of thought leaders from all areas on this exact topic. Six key themes emerged: 

1. There is no silver bullet for addressing the harms of AI while still enabling its benefits. Several limitations and uncertainties with current labeling techniques were shared, including that preliminary research finds that labels related to AI in journalism can actually lead to less trust from the public; current labeling techniques don’t differentiate between uses of AI that help inform rather than disinform; and intended audiences and identification elements (provenance, watermarking, fingerprinting and detection) need more clarification and understanding. 

2. In addition to labels, we need to experiment with a number of possible tools. A few ideas shared by participants: 

● Secure Sockets Layer (SSL) for AI: Originally designed to be an encryption tool for e-commerce, SSL is now used by virtually every website as a privacy and authentication tool. As one participant stated, there’s no “values determination,” just a determination on the content’s origin. 

● Accessible incentive structures to adopt standards: With a nod to Search Engine Optimization (SEO) as an imperfect example, what structures might incentivize participation among all but those with ill intent? And how could we measure their effectiveness? 

● Educating and training the public: Research to date shows that most of the public seems to distrust any use of AI in news content. A more nuanced understanding, through AI literacy programs, is important to allow journalists to use AI in ways that help serve the public. 

3. Policymakers might consider industry standards as a start, explore potential new regulatory structures, and look for ways they can lead by example. Industry standards have in the past helped create shared and recognized practices that can also evolve over time. Such development would be best served by “the involvement of a wide spectrum of stakeholders.” It is also important to reconsider what regulatory structures should look like when technology is constantly advancing. And finally, the point was made that government bodies should themselves be early adopters of various standards, rules or policies. 

4. Technology companies need to be actively involved in creating and providing access to tools “to help the media industry detect what we cannot detect on our own,” and in helping users understand the benefits and risks of certain AI technologies. Technology companies also need to do this collaboratively, working alongside news media, researchers and others. One key area for technology companies to focus on is enhanced transparency, as discussed above and below, along with collaborative discussions about the use of journalistic material in model building. There is much debate around copyright, fair use and possible licensing. We need to come together to determine both the legal and moral standing on these matters. 

5. More research is critical. Current research is a start — but limited. We need much more data to understand why and how certain policy and technology approaches work better than others. 

6. Newsrooms should use AI to innovate, but with a degree of caution. Journalists need to apply layers of transparency in their work, just as they expect from the people and organizations they cover. They should consider and address all the elements of AI use, including the actors, behaviors and content (the “ABCs of disinformation”). 

Some of the ethical and legal concerns posed by the use of AI are transparency, manipulation, credibility and compensation. On the transparency side, the public has raised concerns over not knowing which content is AI-generated or whether copyrighted material was used to train the large language model (LLM) behind it. As I previously mentioned, additional transparency into AI training and the distribution of AI content will be critical to sustaining this digital era of news. Like many forms of technology, AI is being abused by some actors who use it to harm others. As you’ve likely seen in the news, there has been a rise in synthetic media such as deepfakes, which raise ethical concerns because actors use AI to manipulate imagery in ways that may cause reputational harm to others and spread mis- and disinformation. CNTI’s recent deepfakes primer outlines the need for a collaborative response from governments, researchers, the technology industry and journalists to mitigate the risks of deepfakes, while considering freedom of expression and safety. 

What are the challenges in terms of privacy protection, manipulation of public opinion and media accountability? 

Concerns with the privacy protection of users’ data have grown substantially as our society has become more digitally reliant. As with the issue of transparency in AI, the public and technology companies face challenges in protecting users’ privacy online. The issue remains complex because independent online media are critical to the protection of an open, connected internet, but an open internet also increases security vulnerabilities of users’ data. As shown by recent court cases and legislation, there is continuing debate over whether digital platforms, governments or users themselves are responsible for privacy protection. As governments around the world work to regulate internet governance, they must ensure policy frameworks address the distinctions among different forms of internet fragmentation. 

Another highly debated concern with the rise of AI is copyright — both the use of journalistic content in building models and journalists’ use of other content in their reporting. Concerns have been raised that LLMs are being trained on copyrighted materials and that the outputs of LLMs infringe on these materials. AI and copyright issues are very complex and vary by country. As a first step, it is critical that AI developers are transparent and give credit to copyright owners when applicable. The news media also face a question around copyright infringement in their own reporting. As our society advances digitally, our copyright laws should be modernized in a way that benefits independent, competitive journalism and an open internet. There must be more informed and comprehensive discussions among publishers, technology companies and policymakers to structure new laws in a way that protects journalistic work while also recognizing the ways the public accesses and interacts with creative works in our digital societies. 

AI has great potential to improve the delivery of fact-based, timely and relevant news to the public, as long as the necessary safeguards are put in place. For example, AI can boost productivity in journalists’ research and content creation, helping them generate stories more quickly. This can be very beneficial, given how time-sensitive journalists’ work is, but it is important that they fact-check and ensure that the content produced is accurate. As previously stated, we should envision a setting in which AI aids journalists, not replaces them. Journalists must also be held accountable for working alongside the AI they use to ensure that accurate, fact-based reporting is produced.

Artificial intelligence is sometimes presented as a potential tool in the fight against online disinformation. What are your thoughts on this? To what extent could it really help to solve this problem, and what are its limits? Is there a risk of censoring information that does not meet certain criteria? 

AI can certainly be two sides of the same coin. It can be, and has been, used to create more, and a wider range of, disinformation online, such as deepfakes and other forms of synthetic media. Deepfakes have recently been used during the U.S. presidential nomination process and in European politics (for example, in Slovakia and the U.K.). The increase in deepfakes has even led to a voluntary European Union code under which political parties pledge not to employ deceptive content during the EU elections later this summer, and which calls on technology platforms to begin labeling AI-generated content. On the other hand, AI models also retain a wide variety of data and can serve as a tool to identify disinformation where a human may not be able to. There will always be a need for humans to help generate and verify information, but AI can and should be responsibly used as a tool to aid that process. 

Whether disinformation was AI generated or not, mitigating disinformation also poses the risk of restricting press independence or free speech. To ensure that these fundamental human rights are protected, publishers, platforms and policymakers must share a responsibility to respond to growing concerns around disinformation, especially in this election year. 

How can traditional media adapt to the age of artificial intelligence to remain relevant and competitive in a constantly changing media landscape? What would be the best recipe for harmoniously combining the use of AI and human intelligence? 

I’m not sure there is a singular “best recipe” for AI and humans to work completely in harmony. Experts and researchers are still learning about and analyzing this new and emerging technology, and it will take additional time and research to determine how AI is best integrated into the news landscape. What seems counterproductive is to ignore it or push against any integration of AI in society. A number of extremely positive impacts have already been seen in other parts of society, such as health science, entertainment and even the arts. Instead, we should work together across industries and geographies to a) create greater understanding, b) create greater equality of access, c) provide nuanced labeling around the types of uses and d) experiment with new uses of human intelligence to further advance society. The same is true for how AI gets utilized by the news industry: we need creative thinking about the ways it can help journalists gather information and produce even more valuable reporting that supports an informed society.

Are there any concrete examples where artificial intelligence has been used successfully to improve media operations or efficiency (in the USA / Europe / Asia…)? What are the results and implications of these use cases? 

Yes, there are many concrete examples. Many newsrooms have begun using AI to run simulations to test and improve copyeditors’ headline writing, and to create auto-summaries of long articles. For years, newsrooms have also used AI to draw inferences from massive datasets and to manage giant content-management and research tasks, such as identifying relevant vectors of inquiry in, for instance, the Pulitzer-Prize-winning ‘Panama Papers’ coverage. 

More are included in CNTI’s recent report: Watermarks are Just One of Many Tools Needed for Effective Use of AI in News. 

* One use that has particular relevance for local newspapers is digitizing archives. With the help of AI technology, media organizations can efficiently digitize archives and use these data for future reporting or research. This easy access to archives is meaningful because local outlets tend to cover topics and issues that do not receive national coverage. 

* Schibsted, a Norwegian media and brand network, is organizing the development of a Norwegian large language model (LLM). It will serve as a local alternative to other general LLMs. 

* A participant in CNTI’s recent convening shared an innovative use of AI in Zimbabwe, in which an AI model was trained on local dialects. The resulting chatbot is more representative of users in that region than other general AI language models. 

* One example of innovative use of AI in content creation is at the Baltimore Times, which used AI to better connect with its audience. 

Participant Aimee Rinehart shared a blueprint for her CUNY AI Innovation project. This project aims to create a journalism-specific LLM for journalists and newsrooms to use. The Tow Center for Digital Journalism at Columbia University released a recent report by Felix Simon titled “Artificial Intelligence in the News: How AI Retools, Rationalizes, and Reshapes Journalism and the Public Arena.”


Craig Forman is an American entrepreneur, media executive, and former foreign correspondent who served as chief executive officer of The McClatchy Company. He previously worked at The Wall Street Journal. He is currently a partner at NextNews Ventures, an early-stage private investment fund based in San Francisco. Forman has been a non-resident fellow at the Shorenstein Center at Harvard University’s Kennedy School of Government. He co-founded, and is Executive Chair of the Board of Directors of, the Center for News, Technology & Innovation, an independent global policy research center that seeks to foster independent and sustainable media, maintain an open internet, and promote informed conversations about public policy.