Content created by artificial intelligence proliferates at a remarkably rapid pace, and AI's power to transform our informational landscape is immense. AI analysis is more than twice as fast as human discovery, giving us a wellspring of optimized data that expedites decision making. However, if the AI dataset is flawed, it can drown out fact with misinformation and disinformation, as its spread is exponential.
With the increased prevalence of disinformation on topics ranging from politics to finance and beyond, it is easy to feel overwhelmed. So, how does one confidently wring out the fact from the fiction when overly saturated with information?
In this article, we will dive deeper into the role of AI in the disinformation landscape, examples of misinformation spread by AI-generated articles and the need for fact-checking articles. We will also show how Nexis is a powerful and sophisticated tool for you to confidently and consistently search smarter, empowering you to easily discern fact from fiction.
Artificial Intelligence plays an outsized role in the disinformation landscape of our daily lives, affecting everything from personal beliefs to economic decisions. AI is incredible at quickly and deftly analyzing immense data sets and learning language, but we are dependent on those who create AI and teach it to do so in good faith.
Challenges in combating AI-created misinformation
The race to get information out quickly can cause problems, especially when speed is prioritized over fact-checking AI output. When false information proliferates fast enough to seem valid, articles are picked up by local news and radio. Even when broadcasters claim they are still verifying the information, the false version may be all an individual hears and shares. The damage can be done on a mass scale in minutes and is rarely reversible.
More troubling, criminals and criminal enterprises can intentionally embed malicious code into articles for use in cybercrimes. AI can create fake profiles presenting false information that can shift financial markets, influence foreign affairs and introduce false social movements. Such content creates confusion by mimicking the language of major news sources and by seeding the data sets that feed machine-learning algorithms. The mimicry also makes it hard to trace misinformation back to its original source, since so many people share it unwittingly.
MORE: How misinformation spreads on social media, and how to combat it
We have all done an internet search that yielded results with pages and pages worth of sources, some from well-known entities and many of them from unknown sites and authors. Whether you’re searching for information on finances or world news, the validity of the articles you engage with is paramount.
Recently, many financial publications have used artificial intelligence to write articles explaining how financial services work. For example, in an AI-generated CNET article explaining interest rates for high-yield savings accounts and car loans, the article incorrectly implied that you would earn double your principal investment, rather than stating that your year-end account total would be your principal plus the accrued interest. The article lacked the nuance and specificity of language needed to properly illustrate how investment returns and loan interest accrue.
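The distinction the AI-generated article blurred can be shown in a few lines of arithmetic. The figures below are hypothetical illustrations, not numbers from the CNET piece:

```python
# Hypothetical example: a $10,000 deposit in a savings account at 3% APY.
# The error described above conflates "interest earned" with "year-end total".
principal = 10_000.00
rate = 0.03  # 3% annual percentage yield

interest_earned = principal * rate            # what the account actually pays you
year_end_total = principal + interest_earned  # principal PLUS interest

print(f"Interest earned: ${interest_earned:,.2f}")  # $300.00
print(f"Year-end total:  ${year_end_total:,.2f}")   # $10,300.00
# Claiming you "earn $10,300" describes the total balance, not the interest --
# a small wording slip that roughly doubles the apparent return.
```

A human editor would catch that slip instantly; an unreviewed AI draft may not.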
Misinformation like this might lead someone to base their loan decisions on false information and decline to work with the financial institution. And if such articles are posted on the company's own website, clients may believe the institution is engaging in bad business, driving them away.
On the other end of the spectrum, AI-generated articles and photographs can be used to promote misinformation and publicize false public opinions. In 2019, The Associated Press reported on fake LinkedIn profiles of seemingly legitimate journalists, analysts and consultants. The profiles included AI-generated photographs and bios that added to their apparent authenticity and credibility, even though they were fake personas operated by individuals with malicious intent.
These deceptive practices have incredible power to influence public opinion and decision making. Someone may unknowingly form an opinion based on frequent falsehoods flooding their feeds, on subjects ranging from people and policy to finance and science. Misinformation and disinformation shared in bulk can override fact when spread widely, causing people to make decisions that are not aligned with their best interests and needs.
MORE: The consequences of sharing misinformation
AI is not able to make complex judgments about the truthfulness of the statements and articles it creates. When the training data used to teach it is noisy (i.e., labeled incorrectly, whether intentionally or not) or the data set is too small, accuracy suffers.
AI needs a large enough data set for the system to learn properly. When unverified and false information ends up in these data sets, the problematic material grows at an exponential rate.
AI doesn’t create the problem, but it does magnify existing issues and biases. L. Song Richardson of UC Irvine School of Law writes that the issues we find in today's algorithms will mirror issues in the real world, for example the lack of accountability for existing racial and gender biases in employment.
Through intensive, multilayered fact-checking, we can change the disinformation landscape for the better. However, fact-checking at the scale of AI-generated articles is a massive challenge.
MORE: 4 causes of misinformation that block business success
The race for fact-checkers to keep pace with the mounting information is already difficult and is only going to get harder, which is why you need the right research tools.
Nexis® is an unrivaled resource for fact-checking AI articles, as it is the only competitive intelligence platform powered by the largest collection of global content, including intellectual property, litigation and M&A news. Our content universe helps you easily find the answers you need.
Check it out for yourself with an instant free trial of Nexis® and search smarter, not harder.