Content created by artificial intelligence proliferates at a remarkably rapid pace. AI's power to transform our informational landscape is immense: its analysis is more than twice as fast as human discovery, giving us a wellspring of optimized data that expedites decision making. However, if the dataset behind an AI system is flawed, it can drown out fact with misinformation and disinformation, spreading falsehoods exponentially.
With the increased prevalence of disinformation on topics ranging from politics to finance and beyond, it is easy to feel overwhelmed. So, how does one confidently separate fact from fiction when saturated with information?
In this article, we will dive deeper into the role of AI in the disinformation landscape, look at examples of misinformation spread by AI-generated articles and explain why fact-checking matters. Let's get started.
Artificial intelligence plays an outsized role in the disinformation landscape of our daily lives, affecting everything from personal beliefs to economic decisions. AI is incredible at quickly and deftly analyzing immense data sets and learning language, but we depend on those who build and train AI to do so in good faith.
The race to get information out quickly can cause problems, especially when AI output goes unchecked because speed is prioritized over accuracy. When false information proliferates fast enough to seem valid, articles get picked up by local news and radio. Even when broadcasters claim they are still verifying the information, the false version may be all an individual hears and shares. The damage can be done at mass scale in minutes and is rarely reversible.
In a more problematic use, criminals and criminal enterprises can intentionally embed malicious code into articles to be used for cybercrimes. AI can create fake profiles presenting false information that can shift financial markets, influence foreign affairs and introduce false social movements. These campaigns create confusion by mimicking the language of major news sources and seeding data sets that influence machine-learning algorithms. The mimicry makes it hard to distinguish the source of the misinformation from those who unwittingly shared it.
MORE: How misinformation spreads on social media--and how to combat it
We have all done an internet search that yielded results with pages and pages worth of sources, some from well-known entities and many of them from unknown sites and authors. Whether you’re searching for information on finances or world news, the validity of the articles you engage with is paramount.
Recently, many financial publications have used artificial intelligence to write articles about the ways financial services work. For example, in a CNET AI-generated article explaining interest rates for high-yield savings accounts and car loans, the article incorrectly implied that you would earn double your principal investment, rather than stating that your year-end account total will be your principal plus the accrued interest. It lacked the nuance and specificity of language needed to properly illustrate how investment returns and loan interest accrue.
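The gap between "doubling your money" and simply accruing interest is easy to show with arithmetic. The figures below (a $10,000 deposit at 3% APY) are invented for illustration and are not taken from the CNET article.

```python
# Hypothetical figures: $10,000 principal at 3% APY, compounded annually.
def year_end_balance(principal, apy, years=1):
    """Principal plus compound interest, not principal times two."""
    return principal * (1 + apy) ** years

balance = year_end_balance(10_000, 0.03)
print(round(balance, 2))      # 10300.0: the principal plus $300 of interest
print(balance >= 2 * 10_000)  # False: nowhere near "double"
```

Even over many years, doubling at 3% APY takes roughly 24 years of compounding, which is exactly the kind of nuance the AI-generated article glossed over.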
The misinformation in the article might lead someone to believe they will double their money, or to base their loan decisions on false information and decline to work with the financial institution as a result. If articles containing misinformation are posted on a company's website, clients may believe the institution is engaging in bad business, driving them away.
On the other end of the spectrum, AI-generated articles and photographs can be used to promote misinformation and publicize false public opinions. In 2019, The Associated Press unwittingly reviewed fake LinkedIn profiles of seemingly legitimate journalists, analysts and consultants. The profiles included AI-generated photographs and bios that added to their authenticity and credibility, despite being fake personas operated by individuals with malicious intent.
These deceptive practices have an incredible power to influence public opinion and decision making. Unknowingly, someone may form an opinion based on frequent falsehoods flooding their feeds regarding subjects from people and policy to finance and science. Articles in bulk, rife with misinformation or disinformation, can override fact if they are shared widely, causing people to make decisions that are not aligned with their best interests and needs.
MORE: The consequences of sharing misinformation
AI cannot make complex judgments about the truthfulness of the statements and articles it creates. When the training data is noisy (i.e., labeled incorrectly, whether intentionally or not) or the data set is too small, accuracy suffers.
AI needs a large enough set of data for the system to properly learn. When unverified and false information ends up in these data sets, the problematic material grows at an exponential rate.
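The effect of noisy labels and undersized training sets can be illustrated with a toy "classifier" that simply learns the majority label it has seen for each topic word. This is a simplified sketch, not a real training pipeline; the topics, labels and noise rates are invented for illustration.

```python
import random
from collections import Counter, defaultdict

random.seed(42)

# Invented ground truth: each topic word has one correct label.
TRUTH = {"markets": "finance", "elections": "politics", "vaccines": "science"}
LABELS = sorted(set(TRUTH.values()))

def make_data(n, noise_rate=0.0):
    """Generate (word, label) pairs, flipping a fraction of labels to simulate noise."""
    data = []
    for _ in range(n):
        word = random.choice(sorted(TRUTH))
        label = TRUTH[word]
        if random.random() < noise_rate:
            label = random.choice([l for l in LABELS if l != label])
        data.append((word, label))
    return data

def train(data):
    """'Learn' the majority label observed for each word."""
    votes = defaultdict(Counter)
    for word, label in data:
        votes[word][label] += 1
    return {word: counts.most_common(1)[0][0] for word, counts in votes.items()}

def accuracy(model, test_set):
    return sum(model.get(w) == lbl for w, lbl in test_set) / len(test_set)

test_set = make_data(1_000)                    # clean evaluation data
clean = train(make_data(500, noise_rate=0.0))  # large, clean training set
noisy = train(make_data(500, noise_rate=0.8))  # same size, 80% mislabeled
tiny = train(make_data(6, noise_rate=0.0))     # clean but far too small

print(accuracy(clean, test_set))  # 1.0: enough clean data recovers the truth
print(accuracy(noisy, test_set))  # near 0: wrong labels outvote the right ones
print(accuracy(tiny, test_set))   # unreliable: some words may never be seen
```

With plentiful clean data the toy model recovers the truth perfectly; once most labels are wrong, the wrong answers win the vote, and with too few examples the model simply has gaps.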
AI doesn't create the problem, but it does magnify existing issues and biases. L. Song Richardson of UC Irvine School of Law writes that the issues we find in current algorithms mirror issues in the real world, for example the lack of accountability for existing racial and gender biases in employment.
Through the implementation of intensive and multilayered fact-checking, we can change the disinformation landscape for the better. However, fact-checking at the scale of AI-generated articles is a massive challenge.
MORE: 4 causes of misinformation that block business success
The race for fact-checkers to keep pace with the mounting information is already difficult and is only going to get harder, which is why you need the right research tools. Open web searches take time, and their results aren't always verifiable. By contrast, a specialized research platform that includes global news data from a variety of sources lets you cross-check your facts and make sure that the content you share is accurate.
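One simple version of that cross-check is requiring corroboration from a minimum number of independent outlets before treating a claim as verified. The outlet names and threshold below are illustrative assumptions, not any real platform's API.

```python
# Illustrative sketch: treat a claim as corroborated only if it appears
# in at least `min_sources` distinct trusted outlets. Outlet names are
# hypothetical placeholders.
TRUSTED = {"Wire Service A", "Wire Service B", "Newspaper C", "Broadcaster D"}

def corroborated(claim_sources, trusted=TRUSTED, min_sources=2):
    """Return (is_corroborated, which trusted outlets carried the claim)."""
    hits = sorted(set(claim_sources) & trusted)
    return len(hits) >= min_sources, hits

print(corroborated(["Wire Service A", "someblog.example", "Newspaper C"]))
# (True, ['Newspaper C', 'Wire Service A'])

print(corroborated(["someblog.example", "pastesite.example"]))
# (False, [])
```

A real platform would of course weight source reliability rather than use a flat allowlist, but the principle is the same: one unverified article should never be treated as confirmation on its own.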
Furthermore, with dedicated research platforms, you can set up alerts to keep track of popular topics, complete with visualizations of trending content, related topics and top sources. This eliminates the need to manually research all of your topics, allowing you to spend more time investigating new stories while feeling confident in the accuracy of the information you're sharing.