Runway AI Inc. has released its most advanced AI video generation model yet, entering the next phase of a race to build tools that could transform film production. The new Gen-4 system introduces character and scene consistency across multiple shots, a capability that has eluded most AI video generators until now.

The New York-based startup, backed by Google, Nvidia, and Salesforce, is rolling out Gen-4 to all paid subscribers and enterprise customers, with additional features arriving later this week. Users can generate five- and ten-second clips at 720p resolution.

The release comes just days after OpenAI launched a new image generation feature that also enables character consistency across images. That feature became a cultural phenomenon, with millions of users requesting Studio Ghibli-style images through ChatGPT. It was partly the consistency of the Ghibli style across conversations that fueled the craze. The viral trend grew so popular that it temporarily overwhelmed OpenAI's servers, with CEO Sam Altman tweeting that "our GPUs are melting" under the unprecedented demand. The Ghibli-style images also sparked heated debate over copyright, raising questions about whether AI companies can legally imitate distinctive artistic styles.

Visual continuity: the missing piece in AI filmmaking, until now

So if character consistency drove explosive viral growth for OpenAI's image generation, could the same happen for video? Character and scene consistency, maintaining the same visual elements across multiple shots and angles, has been the Achilles' heel of AI video generation.
When a character's face subtly changes between cuts, or a background element vanishes without explanation, the artificial nature of the content becomes immediately obvious to viewers.

The challenge stems from how these models work at a fundamental level. Earlier AI video generators treated each frame as a largely separate creative task, with only loose connections between frames. Imagine asking a room full of artists to each draw one frame of a film without seeing what came before or after: the result would be visually disjointed.

Runway's Gen-4 appears to tackle this problem by maintaining what amounts to a persistent memory of visual elements. Once a character, object, or environment is established, the system can render it from different angles while preserving its core attributes. This is more than a technical refinement; it is the difference between generating striking visual snippets and telling actual stories.

"Using visual references, combined with instructions, Gen-4 allows you to create new images and videos with consistent styles, subjects, locations and more. Allowing for continuity and control within your stories. To test the model's abilities, we put together..." pic.twitter.com/iyz2Baew2U — Runway (@runwayml) March 31, 2025

According to Runway's documentation, Gen-4 lets users provide reference images of subjects and describe the composition they want, with the AI generating consistent output from different angles. The company claims the model can render videos with realistic motion while maintaining consistency of subject, object, and style.

To showcase the model's capabilities, Runway released several short films created entirely with Gen-4. One, "NYC is a Zoo," demonstrates the model's visual effects by placing photorealistic animals in New York City settings. Another, "The Retrieval," follows explorers searching for a mysterious flower and was produced in less than a week.
From facial animation to world models: Runway's evolving AI filmmaking pipeline

Gen-4 builds on Runway's previous tools. In October, the company released Act-One, a feature that lets filmmakers capture facial expressions from smartphone video and transfer them to AI-generated characters. The following month, Runway added advanced 3D camera controls to Gen-3 Alpha Turbo, allowing users to zoom in and out of scenes while preserving character consistency.

This trajectory reveals Runway's strategic vision. While competitors focus on generating ever more realistic single images or clips, Runway has been assembling the components of a complete digital production pipeline. The approach looks closer to how actual filmmakers work: treating performance, coverage, and visual continuity as interconnected challenges rather than isolated technical hurdles. The progression from facial animation tools to full scene consistency suggests Runway understands that AI-assisted filmmaking must follow the logic of traditional production to be genuinely useful. It is the difference between building a technology demo and building tools that professionals can actually fold into their workflows.

The billion-dollar AI video race heats up

The financial stakes are high for Runway, which is reportedly raising a new funding round that would value the company at $4 billion. According to financial reports, the startup aims to reach $300 million in annualized revenue this year after launching new products and an API for its video generation models.

Runway has also pursued Hollywood partnerships, striking a deal with Lionsgate to build a custom AI video generation model based on the studio's catalog of more than 20,000 titles. The company has also established its Hundred Film Fund, offering filmmakers up to $1 million to produce movies using AI.
"We believe the best stories are yet to be told, but traditional funding mechanisms often overlook new and emerging visions within the industry's larger ecosystem," Runway explains on its website.

The technology nonetheless raises fears for the film industry. A 2024 study commissioned by the Animation Guild found that 75% of film production companies that have adopted AI have reduced, consolidated, or eliminated jobs. The study projects that more than 100,000 entertainment jobs in the United States will be disrupted by generative AI by 2026.

Copyright questions trail AI's creative explosion

Like other AI companies, Runway faces legal scrutiny over its training data. The company is currently defending itself in a lawsuit brought by artists who allege their copyrighted work was used to train AI models without permission. Runway has cited the fair use doctrine in its defense, though courts have yet to rule on this application of copyright law.

The copyright debate intensified last week with OpenAI's Studio Ghibli feature, which let users generate images in the distinctive style of Hayao Miyazaki's animation without explicit permission. Unlike OpenAI, which declines to generate images in the style of individual living artists but permits studio styles, Runway has not publicly clarified its policies on style imitation.

That distinction grows increasingly arbitrary as AI models become more sophisticated. The line between learning from broad artistic traditions and copying a specific creator's style has blurred almost to invisibility. When an AI can convincingly mimic a visual language that took Miyazaki decades to develop, does it matter whether the prompt names the studio or the artist himself?

When asked about its training data sources, Runway declined to provide details, citing competitive concerns.
This opacity has become standard practice among AI developers, but it remains a point of contention for creators.

As marketing agencies, educational content creators, and corporate communications teams explore how tools like Gen-4 can streamline video production, the question shifts from technical capability to creative application.

For filmmakers, the technology represents both opportunity and disruption. Independent creators gain access to visual effects capabilities previously reserved for major studios, while traditional VFX professionals face uncertainty.

The uncomfortable truth is that technical limitations were never what kept most people from making compelling films. The ability to maintain visual continuity will not suddenly mint a generation of storytelling geniuses. What it might do is remove enough friction from the process that more people can experiment with visual narrative without specialized training or expensive equipment.

Perhaps the most profound aspect of Gen-4 is not what it can create, but what it suggests about our relationship with visual media going forward. We are entering an era in which the bottleneck is not technical skill or budget, but imagination and purpose. In a world where anyone can generate any image they can describe, the important question becomes: what is worth showing? As we enter an age when making a film requires little more than a reference image and a prompt, the most pressing question is not whether AI can create convincing videos, but whether we can find something meaningful to say when the tools to say anything are in our hands.
2025-03-31 20:00:00