
5 Steps to Developing an Ethical Approach to AI

October 28, 2024 (5 min read)
AI can be a great tool for improving organizational efficiency, but you need to make sure you are addressing ethical concerns.

AI offers significant opportunities for companies, but the fast-emerging technology raises ethical concerns. Many of these come back to the data the technology leverages, including how that data was collected and how it is being used.

In this blog, we look at why companies benefit from prioritizing ethics when bringing in AI and data. We then outline five steps for developing an ethical approach, with help from LexisNexis®.

The ethics of data and AI: why it matters

The use of AI presents companies with reputational and strategic risks. Axa’s Future Risks Report surveys people in 50 countries about their main concerns for the coming year. Risks associated with big data and AI rose from 14th place in 2022 to 4th in 2023, and 70% of the public said AI research should be stopped altogether. This has major implications for companies: customers increasingly fear their personal data is being fed into AI tools for unclear ends, and if they are not satisfied that a company is using technology and data ethically, they will take their business elsewhere.

One reason for public uncertainty around the technology is that AI models are typically seen as a “black box”: the rationale behind their outputs is not usually provided. This is particularly true of generative AI solutions. For example, Google’s Gemini generative AI tool was criticized in February 2024 for producing what its CEO called “problematic text and image responses”, which he described as “unacceptable” while noting that “no AI is perfect”.

Prominent technology companies are currently facing lawsuits over allegations they inappropriately scraped individuals’ data from the internet, especially from social media accounts. Others have faced regulatory scrutiny over alleged copyright, data protection, and cybersecurity breaches. Any company seeking to bring in AI technology and power it with large data sets must therefore consider the risks involved and prioritize ethics.

It is clearly important for companies to take an ethical approach to data and technology, yet only 15.9% of data and technology leaders surveyed for the 2024 Wavestone report said the industry has done enough to address the ethical side. This is an opportunity for companies that can earn the public’s trust in their approach. Drawing on data from the LexisNexis Future of Work Report 2024, which surveyed over 500 executives about their current and planned use of AI, we suggest the following steps:

Step one: Source credible data from original sources

Over 70% of executives told the LexisNexis survey that using trusted and accurate data sources could improve the overall level of trust in how their company uses generative AI. This data should not only come from accurate and credible sources; it must also be sourced and delivered in a way that complies with regulations and with the permissions granted by publishers and rights holders. Approval should be sought from publishers before their content is used in specific AI tools, such as generative AI.

MORE: How third-party data helps your company gain a market advantage

Step two: Ensure human oversight

A benefit of AI is that it can surface insights from large data sets that would be difficult or impossible for humans to find. But it remains important for staff members to oversee the data sources used and to review AI outputs. This provides an effective counterweight to some of the risks of AI’s ‘black box’. In the LexisNexis report, 97% of respondents agreed that human validation of generative AI’s output is important.
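
As an illustration only (this sketch is not from the LexisNexis report), one simple way to enforce that validation step is a review queue that holds AI-generated output until a named person approves it. The Python below is a minimal, hypothetical example; the class, field, and reviewer names are all assumptions for demonstration.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    """An AI-generated output awaiting human review (hypothetical structure)."""
    prompt: str
    ai_output: str
    approved: bool = False
    reviewer: str = ""
    notes: str = ""

class ReviewQueue:
    """Holds generative AI drafts until a human reviewer signs off.
    Nothing is released without explicit approval."""

    def __init__(self) -> None:
        self.pending: list[Draft] = []
        self.released: list[Draft] = []

    def submit(self, draft: Draft) -> None:
        """Every AI output enters the queue; none goes straight to publication."""
        self.pending.append(draft)

    def review(self, draft: Draft, reviewer: str, approve: bool, notes: str = "") -> None:
        """Record who reviewed the draft and release it only if approved."""
        draft.reviewer, draft.notes = reviewer, notes
        if approve:
            draft.approved = True
            self.pending.remove(draft)
            self.released.append(draft)
        # Rejected drafts stay pending for revision rather than being released.

queue = ReviewQueue()
draft = Draft(prompt="Summarize Q3 churn drivers", ai_output="(model output here)")
queue.submit(draft)
queue.review(draft, reviewer="j.smith", approve=True, notes="Figures checked against source.")
assert draft in queue.released
```

The design choice worth noting is that approval is an explicit, attributable action: the reviewer’s name travels with the output, which supports both accountability and the transparency discussed in step four.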

MORE: Why you need to fact-check AI-generated content for misinformation

Step three: Establish ethical guidelines

Establishing ethical guidelines and standards around generative AI will be crucial, according to 86% of executives surveyed. The same holds for a company’s approach to technology more broadly. Any firm should already have broad values and ethical principles, and should agree on how these apply to its use of technology and data. Examples of guidelines which some firms have implemented include:

  • Only using data from original sources.
  • Requiring all staff to undergo training in the ethics of AI.
  • Setting up a committee to review the ethics of every proposal to use AI for a business need.
  • Using a Retrieval-Augmented Generation (RAG) approach in generative AI tools to mitigate the risk of AI hallucinations (a rough sketch of the idea follows this list).
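
To make the RAG idea in the last bullet concrete: the model is given relevant source passages alongside the question, so it answers from supplied material rather than from its parametric memory. The Python sketch below is a toy illustration, not any vendor’s implementation; the sample passages are placeholders, and the bag-of-words similarity stands in for the trained embedding models real systems use.

```python
from collections import Counter
import math

# Toy document store. In practice these would be licensed, credibly
# sourced passages; these three strings are hypothetical examples.
DOCUMENTS = [
    "The 2024 survey found 97% of executives want human validation of AI output.",
    "Licensed news content requires publisher permission for generative AI use.",
    "Retrieval-Augmented Generation grounds model answers in retrieved source text.",
]

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real systems use a trained embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k stored passages most similar to the query."""
    q = embed(query)
    ranked = sorted(DOCUMENTS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_grounded_prompt(question: str) -> str:
    """Prepend retrieved passages and instruct the model to answer only from
    them -- the core mechanism by which RAG mitigates hallucinations."""
    context = "\n".join(f"- {p}" for p in retrieve(question))
    return (
        "Answer using ONLY the sources below. If they do not contain the answer, say so.\n"
        f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

print(build_grounded_prompt("Why does RAG reduce hallucinations?"))
```

The final prompt would then be sent to a generative AI model; because the model is told to answer only from the retrieved sources, unsupported claims are easier to detect and trace.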

MORE: From start to finish: Your checklist for responsible AI

Step four: Communicate transparently

Nearly three-quarters of executives told the LexisNexis survey that greater transparency and clearer explanations of decision-making around their use of generative AI will foster trust in the technology and, by extension, in the company. The CEO must clearly set out the importance of ethics and the company’s efforts to address potential issues. Communications should go to customers, employees, investors, third parties, and even regulators.

MORE: Managing collaboration and communication in research

Step five: Discuss ethics with third parties

A company’s efforts to demonstrate ethics and transparency can be instantly undone if one of its third parties or suppliers is implicated in allegedly unethical behavior. A company should therefore set high expectations for the ethical standards of all prospective and current third parties; making these expectations part of the contract helps to formalize them.

Companies should also carry out thorough due diligence on any prospective third-party providers of data or technology. Our free checklist outlines 10 questions to ask any data and technology provider.

MORE: If you haven’t addressed these third-party risks, you’re behind the curve

LexisNexis supports an ethical approach to technology with credible data for AI and generative AI

As an established data provider for over 50 years, LexisNexis has extensive, long-standing, and in some cases exclusive, content licensing agreements with publishers worldwide. We supply data that enables you to advance your goals while recognizing and respecting the intellectual property rights of our licensed partners. From data acquisition to customer onboarding, we pride ourselves on offering data that is up to date, compliant with licensing agreements and applicable laws, and safeguarded by robust data security and privacy measures. Our transparency around our trusted data should give you the confidence to leverage it in your own AI initiatives and analysis.

Our extensive news coverage, enriched with robust metadata, is readily available for integration into your generative AI projects. Over the past year, we have worked diligently and transparently with our publishers to secure the rights to use their data with generative AI tools. Our portfolio covers over 20,000 licensed titles, with thousands of sources available for use with generative AI technology. The generative AI-enabled dataset includes content from industry giants such as The Associated Press and McClatchy.

Download our free ebook, Harnessing Data for AI Innovation, to learn more about how your company can exploit AI’s opportunities and manage its risks with high-quality data.