What AI Owners Can Learn From Journalism

By Marybeth Sandell, January 30, 2025

As Artificial Intelligence (AI) owners face complaints of fake or biased information, they can take inspiration from an industry that has been battling these issues for more than a century: the news industry. Journalists learned that consumers of news want unbiased, factual information, and they built codes of conduct to ensure the delivery of such content. They have also learned, repeatedly, that when they abandon their commitment to accuracy and the other elements of their ethical codes, their subscribers seek reliable information elsewhere. AI owners may consider creating their own content standards to earn the same loyalty from their users.

The process of advancing AI technology and making AI-generated content attractive and available to the general public is comparable to that of the printing press and its pivotal influence on newspapers and journalism. Like AI owners today, publishers at the time did not start off with a full-fledged set of standards and ethics to guide them. Some newspaper owners pushed sensational content to attract paying readers, resulting in the creation of the derogatory term “yellow journalism” to describe inaccurate or misleading content [1]. Over time, as sensationalism escalated into a significant threat to credibility, journalists and newspaper owners agreed to set standards for content creation that benefited the consumer of the content while building reliable reputations.

In that era, legislation governing content was largely limited to copyright, slander, and libel. Otherwise, accuracy was left to the media institutions to self-regulate, and reputation protection became the strongest motivator for maintaining ethical standards.

Today, the field of journalism is flooded with initiatives, research, and proposals on how to protect the media from unethical AI-generated content, but there is little or no commentary on how AI owners might emulate what the media industry did to set reputation-building standards.

Here are some common tenets of journalism’s ethical codes that hold relevance for owners and users of AI-generated content.

Accuracy

Editors have argued that the single most important principle in journalism is accuracy. Matthew Winkler, editor-in-chief emeritus at Bloomberg News, interviewed and hired hundreds of reporters over his career. He usually asked one question, and a wrong answer meant automatic rejection: “What is the most important thing in journalism?” The right answer was “accuracy,” which this author got right, kick-starting her 18-year career at the organization. Accuracy and factual reporting are the backbone of building and maintaining trust with the content consumer. Accuracy is ensured through mechanisms that include the following.

Editors

Editors serve as gatekeepers for content; they are the first-line filter after the journalist. Editors monitor for grammar and factual accuracy, and they apply a series of checks to flag accidental inaccuracy. Some editorial tasks are programmed right into desktop tools, while most traditionally remain with human editors.

Fact-Checking

Publications sometimes have separate departments that check the accuracy of statements by researching databases and calling sources to confirm information. Some publications even employ fact-checking reporters who verify the claims of other news sources and publications and publish their findings.

Sourcing

Publications have specific rules for sharing their sources. Bloomberg, for example, requires citing a source before publishing. Exceptions are considered when revealing a source may cause the source harm. This decision often requires a rigorous review process and approval from an executive editor [2], [3].

At Bloomberg, these journalistic ethics were shared beyond the newsroom with the data-handling departments in the form of “principles.” See Figure 1 for how these principles were shared on a mousepad.

Figure 2. AI-generated image using Bing, powered by Dall-E 3. Image requested by Marybeth Sandell; the prompt asked for an image of newspapers and computers on a desk.

Figure 3. AI-generated image using Bing, powered by Dall-E 3. Image requested by Marybeth Sandell; the prompt asked for an image of a futuristic newspaper.

Ombudspersons

Publications employ ombudspersons, who handle complaints from the public. Ombudspersons also have a broad mandate to maintain ethical standards within the organization.

Fireable Offenses

Journalists who do not adhere to these standards lose their jobs. Jayson Blair, formerly of The New York Times, was dismissed for fabricating content, events, and sources. In an attempt to restore its reputation, The New York Times published details of how Blair had deceived readers and of how the paper would endeavor to ensure that similar situations did not arise [4].

Transparency

Transparency is about showing your work and your sources so that consumers of the content can make their own decisions about the information. It gives consumers the power to check accuracy themselves across various media. In research, transparency usually takes the form of footnotes and citation practices. In journalism, it is part of the text and can be expressed as follows.

Citing Sources

Citing sources means revealing the name and location of the source of the material and information used. If the source content is digital, this includes a link directly to the source.

Revealing Omission

When information that may be pertinent cannot be secured, journalistic standards require revealing details about why the information could not be secured. A typical example is when the reporter writes that a person “did not respond to requests for an interview.”

Transparency of Ownership

If the publication is owned by a party that may have an interest in the topic or may gain or lose something because of the news, this fact must be revealed.

Conflict of Interest

All potential conflicts of interest are expected to be revealed or avoided. A reporter should not interview their own relative for a story or use a good friend as a source of information.

Libel or Do-No-Harm

Similar to the Hippocratic Oath, journalists follow the principle of conducting interviews and research so as not to use information gathered in a manner that would harm the people involved. Academic research has similar guidelines for its information-gathering processes. Here are some examples from the journalism industry.

Use of Adjectives and Adverbs

Journalists are trained to avoid adjectives and adverbs that are not clearly backed by facts, in order to avoid unintended bias. Calling something “tall” is relative and may be misconstrued. Saying something is 165 cm states exactly what it is, both to a person who is 200 cm and to one who is 150 cm; it is “tall” to one but “short” to the other. Thus, all adjectives and adverbs are potential flags for misleading content.

Clarity Between Opinion and Fact

The news profession often uses labels for clarity. An opinion piece or a column is clearly labeled Opinion and not News. Labels can be used to show accuracy and provide transparency.

Respecting the Individual

When reporting court cases, for example, journalists use qualifiers such as “alleged” to describe a defendant, to ensure that readers do not inadvertently assume guilt. Also, when writing about a person, the reporter endeavors to give that person time to comment or respond before publication.

Accountability

Like other industries, journalism is governed by laws on content creation, and these often revolve around libel. Publishers can take out libel insurance to cover their exposure to risk. For AI-generated content, it remains unclear where accountability lies: will liability for libelous AI-generated content fall on the maker of the model or the user of the model? This will ultimately be defined by legal processes. One can argue that an actor’s conduct with regard to ethical standards, or the lack thereof, can affect the cost of accountability. The AI industry is just starting its journey down this long and winding road.

Example

An example worth considering as a discussion prompt is the lawyer who recently sued an airline on behalf of a client [5]. The lawyer submitted a brief that cited a number of relevant court decisions, but it was later revealed that no one could actually find the decisions cited in the brief. The lawyer had used OpenAI’s ChatGPT to do his research. Ironically, the lawyer even asked ChatGPT to verify whether the cases were real, and the program confirmed that they were. They were not; the judge went looking for them and found nothing. They had been fabricated by AI.

Had this been content created in journalism, many of the tools used to guarantee accuracy would have caught the erroneous information before publication. For AI owners, such tools could be applied at the point of entry of data into the model, within the model itself, or as post-production filters. AI owners could also create an ombudsperson system for fielding reports of erroneous or harmful content from end users.

This example is just one instance that can prompt more nuanced ethics-based discussions. AI owners can also look to other content-handling industries for inspiration. It would be beneficial for professions beyond journalism, such as real estate or law, where content is created for contracts, to consider what could be extrapolated from their standards to benefit the AI industry. In fact, any industry whose actions can incur risk can provide ethical inspiration.
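As a discussion sketch, one of the post-production filters mentioned above might automatically flag citations that cannot be confirmed against a trusted database, much as a fact-checking desk would. Everything in this snippet is hypothetical: the citation pattern, the `KNOWN_CASES` set, and the function name are illustrative assumptions, and a real system would query an authoritative legal index rather than a hard-coded set.

```python
import re

# Hypothetical database of verified citations. A production filter would
# query a real legal index or search service instead of a local set.
KNOWN_CASES = {
    "Smith v. Jones, 500 F.3d 100 (2d Cir. 2007)",
}

# Rough pattern for a U.S. federal reporter citation, e.g.
# "Smith v. Jones, 500 F.3d 100 (2d Cir. 2007)". Illustrative only.
CITATION_PATTERN = re.compile(
    r"[A-Z][\w.'-]* v\. [A-Z][\w.'-]*, \d+ F\.\d?d? \d+ \([^)]+\)"
)

def flag_unverified_citations(text: str, known_cases=KNOWN_CASES) -> list[str]:
    """Return every citation found in `text` that is absent from the database."""
    found = CITATION_PATTERN.findall(text)
    return [citation for citation in found if citation not in known_cases]
```

A filter like this would not prove a flagged case is fake, only that it needs human review before the content reaches the user, which is exactly the role an editor or fact-checker plays in a newsroom.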

Ownership and Implementation

While there is much discussion about AI needing ethical standards, there is less discussion about how to implement them. Reports from the World Economic Forum and the European Union, for example, call for accuracy and transparency and address bias and libel, all issues journalism has also faced, but they do not address how the AI industry might set up safeguards. For real change to happen, AI owners must take accountability for their own conduct and build a mechanism for ethical adherence from the inside. They can look to journalism, and take inspiration from other content-generating industries, for appropriate and useful tools.

Journalists are busy updating their own standards to include how to deal with AI. What would they propose if they could impact the content that AI is creating and if they could help the AI owners set up their own system of adhering to standards?

Journalism’s codes of ethical conduct have been developed and refined over many decades [6]. This refinement has produced a system of content generation with a high level of public trust and reliability. Such trust did not come overnight.

AI owners should consider adopting a code of conduct and appointing content ombudspersons to help ensure that ethical standards are applied at every stage of the AI process, from the selection of content input to the output of content to the user.

While not discussed directly in this article, one can argue that the news industry itself now faces challenges adhering to its own ethical conduct as fake news and biased news creep back into the media. AI owners should observe this shift and its impact on audiences and customers so that they can avoid losing credibility in the same manner. Should they build an ethical code of conduct, it must include mechanisms for adherence that are transparent to the user and that do not erode from neglect.

Journalists today are joining calls for strict regulation of AI content. However, it would be interesting to hear whether their views would change if they were invited to help build a self-regulating system from within, as they once did for themselves. AI owners should consider inviting them in for advice.

ACKNOWLEDGMENTS

Figures 2 and 3 were created using Bing, powered by Dall-E 3. The images were created using text prompts.

Author Information

Marybeth Sandell is the chief content officer at Arctic Today, Anchorage, AK 99592 USA, a publication that covers security, business innovation, and travel in the countries that touch the Arctic and works to support freedom of speech for reporters in exile in the region. Email: mary@arctictoday.com.

 
