AI Takes Charge of Video Synthesis and Music Editing: First AI Sci-Fi Film “Genesis” Trailer Stuns the Audience

This article is translated from “机器之心”.

Recently, a science fiction movie trailer of less than one minute, called “Trailer: Genesis,” has gone viral on social media.

What’s even more “sci-fi” is that everything in the trailer, from image and video synthesis to music and editing, was done by AI.

The creator, Nicolas Neubert, listed the tools he used: Midjourney for image generation, Runway for video generation, Pixabay for the music, and CapCut for video editing.

Midjourney is a well-known AI image-generation tool, now updated to version 5.2. Runway is an AI-based video production and generation tool whose Gen-2 model is currently available for free trial. CapCut is a free video editor, though users can also choose to edit in Adobe Premiere or Final Cut Pro.

It is understood that Neubert spent about 7 hours on the project. He ran 316 prompts in Midjourney and upscaled 128 images, generated 310 videos in Runway plus one video with text, and in total used 44 clips in the trailer.

Today, Neubert wrote a detailed article explaining the production process of “Genesis,” including the specific workflow and how to use the aforementioned AI tools. Let’s take a look.

Regarding the idea for the film, he said the dystopian theme was inspired by several movies he had seen, which he used as the basis for the story.

The first step in the official production was to build the world and the story.

For the plot of the trailer “Genesis,” Neubert wanted to gradually increase the tension. Therefore, he defined the following three stages:

1. Setting the scene
2. Introducing the threat
3. Climax in the CTA (Call to Action)

Specifically, Neubert first created a draft of the trailer, built around the message of sharing everything, bearing the consequences, and calling on humanity to act.

After defining the overall tone, he started generating scenes around these themes. Neubert worked through a large number of human and sci-fi shots related to the environment, military technology, and combat, and collected them to form a story.

To add some depth, he also added three shots of children with glowing talismans, implying a deeper storyline.

The second step was to generate continuous images in Midjourney.

Here, the prompts matter. Neubert refined the stable prompts he had arrived at in previous posts and created a template that could be reused for every shot of the trailer. The template is as follows:

___________, Star Wars, finely detailed crowd scenes, simple naturalism, blue and yellow, frostpunk, indoor scenes, cinestill 50d --ar 21:9 --style raw

For each scene, he filled in the blank with the subject he wanted; the remaining tokens kept the theme, color, and lighting as consistent as possible across shots.
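
The prompts themselves were written by hand in Midjourney, but the reuse pattern is easy to illustrate in code. Below is a minimal sketch, assuming a hypothetical build_prompt helper and made-up example subjects, of how one fixed template plus a changing subject keeps the style tokens identical across shots:

```python
# Minimal sketch of reusing one Midjourney prompt template across shots.
# The helper name and example subjects are hypothetical; only the leading
# subject changes, while the shared style tokens stay identical.
TEMPLATE = (
    "{subject}, Star Wars, finely detailed crowd scenes, simple naturalism, "
    "blue and yellow, frostpunk, indoor scenes, cinestill 50d --ar 21:9 --style raw"
)

def build_prompt(subject: str) -> str:
    """Fill the blank at the start of the template with a scene subject."""
    return TEMPLATE.format(subject=subject)

if __name__ == "__main__":
    for subject in [
        "female warrior overlooking a ruined city",
        "ordinary citizens fleeing a plaza",
        "cyber hacker at a terminal",
    ]:
        print(build_prompt(subject))
```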

In addition, the Strong Variations feature made it easier to create different scenes while retaining the established color palette: a shot of a female warrior could be turned into a shot of an ordinary citizen, a cyber hacker, or a battle scene without writing new prompts.

The third step was to generate animated images in Runway.

Neubert found this step to be the easiest. In terms of settings, he tried to activate the Upscaled mode whenever possible. However, that mode often produced facial artifacts, so for character shots he usually stuck with standard quality.

It is worth noting that he did not combine text prompts with image prompts. Instead, he simply dragged and dropped an image and regenerated until he was satisfied with the result.

The final step was post-editing in CapCut.

While Midjourney and Runway were generating outputs, Neubert first placed the key scenes he knew would play an important role. For the trailer, he decided that the outdoor shots would serve as the opening.

Then he started planning the text. Since the text could be positioned against the music alone, he did this while there were still no clips on the timeline. In less than an hour he had written the copy and placed it along the timeline. This also helped with image generation, because it gave him an additional fixed reference point for deciding which scenes were still missing.

From there the process was simple: generate clips → import them into CapCut → place them on the timeline, and slowly piece the story together. He also color-matched two or three of the edits to make them look more like grand movie scenes.

The only real skill required in CapCut was synchronizing the clips with the beats. Whenever the music hit a “BWAAA,” he tried to land an action beat within the clip, or to cut to the next clip right after it. This made the entire sequence more immersive.
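
Neubert synced the cuts by ear inside CapCut, but beat positions can also be estimated programmatically as a starting point. The sketch below uses the librosa library, which was not part of Neubert’s workflow; the audio file name is hypothetical:

```python
# Minimal sketch: estimate beat timestamps to guide where cuts could land.
# librosa was not part of Neubert's workflow; the file name is hypothetical.
import librosa

y, sr = librosa.load("trailer_music.mp3")                 # load the track
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)  # estimate tempo and beats
beat_times = librosa.frames_to_time(beat_frames, sr=sr)   # convert frames to seconds

print("Estimated tempo (BPM):", tempo)
print("First beat timestamps (s):", [round(t, 2) for t in beat_times[:8]])
```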

In addition, Neubert thought about how to convey more action through the editing, and used two techniques to do so.

The first technique: Runway takes an image and its model works out which parts should be animated. Neubert reverse-engineered this idea and tried to generate images in Midjourney that already implied motion, for example shots with motion blur, or stills capturing a head or a character in mid-movement.

The second technique: when reviewing the Runway videos, he found that scenes often changed significantly within the 4-second clips. So in the trailer he used a full 4-second clip only twice; all other clips were cut to 0.5-2 seconds and sped up by 1.5-3 times. Because the viewer only ever sees very short clips, each scene appears to contain more motion, essentially a fast-forward effect.
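
The trimming and speed-up were done in CapCut, but the same trick can be sketched in code. Below is a minimal example using the moviepy library, which Neubert did not use; the file names, the 1.5-second window, and the 2x factor are illustrative values from the ranges mentioned above:

```python
# Minimal sketch of the trim-and-speed-up trick described above, using moviepy.
# moviepy was not part of Neubert's workflow; file names and values are examples.
from moviepy.editor import VideoFileClip, vfx

clip = VideoFileClip("runway_clip.mp4")   # a 4-second Gen-2 output
short = clip.subclip(0, 1.5)              # keep only a 0.5-2 s window
fast = short.fx(vfx.speedx, 2.0)          # speed up by 1.5-3x
fast.write_videofile("trailer_shot.mp4", audio=False)
```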

After all these steps, the final result is the stunning “Genesis” trailer shown at the beginning. The trailer has drawn praise, with some saying it is the best Runway-generated video they have seen so far.
