
Everything to Know About Generative AI in Media Production

If you’ve logged onto the internet in 2023, you’ve probably heard about AI advancements. From appealing AI filters on social media to the March 14, 2023 release of GPT-4, artificial intelligence (AI) has become all the rage, and that’s not likely to end anytime soon.

But what does AI have to do with film and media production? Generative AI, a category of artificial intelligence that creates content like text and imagery, could be an industry disrupter for filmmakers and producers. Studios and independent creators alike have the potential to use generative AI for production and editing, but the technology also comes with plenty of downsides to watch out for.

Here, we’ll walk through what generative AI is and how it can be used in media production, plus some ways it’s already being used in the industry that you might not be aware of yet. We’ll also dive into the ethical and business concerns that come along with the growth of AI.

What is generative AI?

Generative AI is a technological tool aimed at changing the way people work. Each AI system is fed a great deal of data and trained to produce factual, high-quality output. According to Reuters, this type of AI “creates brand new content - a text, an image, even computer code.”

It’s different from other forms of AI because it generates new outputs instead of “simply categorizing or identifying data like other AI,” as Reuters reported. Discriminative AI, another major category in the ever-growing space, is designed to separate data along a specific boundary. Examples of discriminative AI include decision trees and logistic regression.

Examples of generative AI, on the other hand, include things like image generators and language models. A generative AI system might take ten photos of a specific person as input and then create 50 headshots that look like that person. Or a system like ChatGPT might take a simple question and write out an answer based on the text it was trained on.
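To make that distinction concrete, here is a minimal Python sketch using the open-source Hugging Face transformers library. The model choices (a small GPT-2 checkpoint and a stock sentiment classifier) and the prompts are purely illustrative, not production recommendations: the first pipeline generates new text from a prompt, while the second only assigns a label to existing text.

```python
# Minimal sketch: generative vs. discriminative models.
# Assumes the Hugging Face "transformers" library is installed;
# the models and prompts are small demo choices for illustration only.
from transformers import pipeline

# Generative: creates brand-new text conditioned on a prompt.
generator = pipeline("text-generation", model="gpt2")
draft = generator("A documentary about deep-sea exploration should open with",
                  max_new_tokens=30)
print(draft[0]["generated_text"])

# Discriminative: categorizes existing input rather than creating content.
classifier = pipeline("sentiment-analysis")
print(classifier("This rough cut of the trailer is fantastic."))
```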

How is generative AI used in media production?

Filmmakers are already likely thinking about how this could apply to media production. Generative AI, because of its ability to produce one-of-a-kind imagery and copy, could greatly reduce the need for busy work when it comes to filmmaking. It could come in handy in all areas of film production and could reduce the costs and time spent on a film.

Let’s break down the many potential uses by going through how each stage of production might enlist the help of generative AI.

Generative AI in Pre-Production

In the pre-production stage, generative AI might help add extra flair or content to part of the script that needs some elbow grease. Or, if a documentary team needs to write a long voiceover segment using facts and figures from their research, they could use a bot like ChatGPT to generate a basic monologue that they then edit and plus up for the film.
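As a rough illustration of that documentary workflow, the sketch below assembles fact-checked research notes into a single prompt for a chat model. The call_chat_model() helper is hypothetical, standing in for whichever chat interface a team actually uses, and the example notes are placeholders; the output should always be treated as a first draft to edit, not finished copy.

```python
# Minimal sketch: turning verified research notes into a voiceover prompt.
# call_chat_model() is a hypothetical placeholder for whatever chat model
# client you use (ChatGPT, a local model, etc.); the notes are examples.

research_notes = [
    "The bridge opened to traffic in 1937.",
    "Construction cost roughly $35 million at the time.",
]

def build_voiceover_prompt(notes, tone="measured, documentary-style"):
    """Combine fact-checked notes into one instruction for a chat model."""
    facts = "\n".join(f"- {note}" for note in notes)
    return (
        f"Write a 60-second {tone} voiceover narration using only these facts:\n"
        f"{facts}\n"
        "Do not invent statistics that are not listed above."
    )

prompt = build_voiceover_prompt(research_notes)
# draft = call_chat_model(prompt)  # hypothetical helper; always edit the result
print(prompt)
```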

Generative AI on Film Sets

When it comes to the actual production period of a film, generative AI could be incredibly useful. For instance, say a narrative film has a scene where the main character visits his childhood home, and the house needs to contain realistic-looking photographs of the actor at a young age. Set designers could use generative AI to create unique, believable “family” photos quickly and accurately.
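As one hedged sketch of how a prop team might prototype that kind of imagery, the snippet below uses the open-source diffusers library with a Stable Diffusion checkpoint; the model ID and prompt are illustrative only. Matching a specific actor’s likeness would require fine-tuning on reference photos, which this sketch does not do, and the usual likeness and consent clearances still apply.

```python
# Minimal sketch: generating a prop "family photo" with Stable Diffusion.
# Assumes the "diffusers" and "torch" packages, a GPU, and downloadable
# model weights; the prompt and model ID are illustrative only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "faded 1990s family snapshot of a young boy at a backyard birthday party"
image = pipe(prompt).images[0]   # returns a PIL image
image.save("prop_family_photo.png")
```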

Generative AI in Post-Production

Of course, much of the tech budget in a film is spent during its editing phases. Generative AI can come in handy at this stage because of its ability to automate editing or generate special effects. AI could help identify and remove curse words if a film needs to lower its age rating, or it could identify the places where the director said “cut” and flag those to editors.
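The flagging step described above can be sketched very simply once the dialogue has been transcribed with timestamps. The segment data and word lists below are made up for illustration; a real pipeline would use a speech-to-text tool and a vetted profanity list.

```python
# Minimal sketch: flagging director call-outs and possible profanity
# in a timestamped transcript. Segments and word lists are illustrative.

FLAG_TERMS = {"cut"}            # director call-outs to surface for editors
PROFANITY = {"damn", "hell"}    # placeholder list; use a vetted wordlist in practice

segments = [
    (12.4, "Okay, rolling, and action."),
    (47.9, "Damn, that take felt great."),
    (63.2, "Cut! Let's reset for the close-up."),
]

def flag_segments(segments, terms):
    """Return (start_seconds, text) pairs whose text contains a flagged term."""
    flagged = []
    for start, text in segments:
        words = {word.strip(".,!?").lower() for word in text.split()}
        if words & terms:
            flagged.append((start, text))
    return flagged

print("Director call-outs:", flag_segments(segments, FLAG_TERMS))
print("Possible profanity:", flag_segments(segments, PROFANITY))
```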

Similarly, AI could create believable special effects such as fireworks or car accidents, using past footage of similar visuals.

Is generative AI currently being used in media production?

There are already heaps of examples of recognizable film titles that have used generative AI in their production process. One major use case is dubbing films for foreign screenings. A company called Flawless created an AI program that automatically syncs actors’ lips to the new language once a film has been dubbed, in a process they call “vubbing.”

As previously mentioned, AI is also helpful for removing swear words. Flawless created a program that strips unsavory language from movies. They implemented this with the 2022 hit film “Fall” when the director wanted to release the movie under a PG-13 rating but found it had too many instances of the F word.

Miramax is currently producing a film called “Here,” starring Tom Hanks and Robin Wright, and the studio announced that it is using AI to create younger-looking versions of the lead actors. Metaphysic AI’s CEO Tom Graham said, “We can actually do things with A.I. that are impossible with traditional methods to get that really hyper-realistic look, that doesn’t look uncanny, doesn’t look weird, and doesn’t look CGI.”

What does generative AI mean for the future of media production?

Companies like Flawless and Metaphysic are sure to stay on the map for Hollywood and beyond. Future developments in generative AI could lead to even smarter scriptwriting and better creative production from these bots. After all, in just six months, OpenAI’s GPT system went from scoring in the lowest 10% of a bar exam to the top 10%, so the improvements seem to be exponential.

This will likely spill across all aspects of media production, from film studios to newsrooms. Investigative journalists could use generative AI programs to summarize a great deal of research into more digestible bits, or systems like ChatGPT to generate copy for articles. Artists and photographers could use generative AI to replicate or improve upon their work so that one piece of content turns into hundreds of unique and sellable images. Surely, there are opportunities we can’t even imagine when it comes to generative AI.

What are the challenges and limitations of generative AI in media production?

Of course, if something sounds too good to be true, it usually is. Generative AI is far from a problem-free solution. One major concern when it comes to AI is ethics. Whether intentionally or not, generative AI could spread misinformation widely if its output is not verified. While unintentionally sharing AI-generated “facts” may not be unethical at its core, it can cause greater problems as that information gets disseminated.

Beyond that, unethical AI use can take a variety of forms. For example, programs could be fed stolen text or graphics as input. A public example of this came when Instagram users posted AI-generated portraits from the app Lensa, and artists realized those images had been created from their artwork without their permission. The Daily Beast called this “arguably the biggest art heist in history.”

In that vein, while generative AI is helpful for making actors look younger or changing the words they’re mouthing, it’s important to remember that with that power comes great responsibility. Deepfakes are a growing concern: people can use AI to insert public figures into compromising content, such as adult films. They can also generate realistic-looking videos in which a politician appears to say something they never said, and unaware onlookers may believe it really happened.

How to use research to validate generative AI

As with any new trend, it’s important to do your research. Research can also help you find new technology that could fix problems you didn’t even know you had. For example, staying on top of industry tools like Flawless will help studios meet the ever-evolving standards for production and could make films more accessible for all audiences.

Additionally, research will help content creators avoid the potential ethical pitfalls created by generative AI. Because of these concerns, it’s imperative to fact-check and thoroughly vet each AI program before using it. Ensuring that your AI was fed proper sources and ethical input is critical. Setting up news alerts for the names of particular AI programs is a useful way to stay on the lookout for public complaints, and potential lawsuits, against the software.

Tools like Nexis, with access to the industry’s largest, diverse collection of high-quality news content and legal data, can help fact check and thoroughly examine these concerns. Using a credible research platform is the best way to ensure that your AI-generated materials are accurate without spending extra time hunting down the sources that you need.

Interested in learning more about what research tools will improve your content? Reach out to us today!

Get in touch

Email: information@lexisnexis.com
Telephone: +31 (0)20 485 3456