But the constant, surprising stream of new applications for AI gives the impression that the future of entertainment is difficult to predict. My general opinion of AI-generated “photographs” has been that they’re like early CGI in movies: impressive to the naked eye, but only because we haven’t yet learned the telltale signs to look for. However, each new version of deep learning models like Midjourney produces images with more natural-looking people and more believable environments, even if (for now) there’s still a generally Lynchian vibe and the regular horror-show glitches with fingers and teeth, or objects that float or collide with each other incorrectly.
The new version of the OpenAI language model has only been available for a week, and already a user has discovered that it can read and interpret the source code of a video game and repackage it as a sort of choose-your-own-adventure novel. Who’s to say that soon you won’t be able to create your own games from scratch with nothing but a prompt?
Voice models of the most prominent American celebrities are so easily accessible that creators need only supply a written script to produce audio of them saying anything. The most popular use, for some reason, is synthetic recordings of US President Joe Biden and former Presidents Barack Obama and Donald Trump insulting each other and ranking everything from Marvel characters to Super Mario games.
This month, US-based web video production company Corridor made headlines with Anime Rock Paper Scissors, a short film it created with rotoscoping: a technique that uses live-action footage as the basis for animation frames. The technique has been used for decades in cartoons (its inventor, Max Fleischer), films (A Scanner Darkly) and video games (Prince of Persia).
The difference is that Corridor used Stable Diffusion, a popular text-to-image model, instead of human illustrators for the task. The company says it trained the model on stills from the anime film Vampire Hunter D: Bloodlust, resulting in a finished product that retains the movements and actions of the actors but looks colourful and animated. It also claims to be the first to have made a film this way.
By anime standards, the video is frankly pretty bad, with the characters’ pupils, hair, and shadows flickering and disappearing in a way that’s characteristic of computer vision, but that a human animator would never choose stylistically. The human movement in the film is also a far cry from the limited but expressive animation of most anime, giving the whole thing an uncanny valley vibe. Then again, the machines will only get better.
More interesting than the animation itself is the process behind it, which Corridor explains in detail in an hour-long video, and the company’s claims about what the technique means for the future. As expected, its claim that it had “just changed animation forever”, and that its technique could democratise an industry that has traditionally relied on highly skilled artists, did not sit well with many animation commentators and fans.
“Not only is this a terrible, terrible idea, but it actually hurts my eyes to look at it,” wrote one detractor, with another saying, “you guys are just lazy crooks spitting on a whole art form.” Others were excited about the technology’s potential to create hyper-personalized content in any style.
Taylor Blackburn of comparison site Finder said these developments point to a future of creative content with faster production times and lower costs.
“Even if it just lets you automate a repetitive task like resizing images or transcribing audio, doing it in seconds instead of minutes can make a world of difference when working to a deadline,” he said.
“One of the strengths of AI is its ability to learn and adapt to new input, allowing it to create unique and personalized content that is tailored to individual preferences.”
It is easy to speculate on possible future implications. Perhaps Netflix or a competitor could create a much cheaper streaming service filled with AI knock-offs of popular shows. Or maybe generative AI will end up where CGI and digital art are today: most productions use it, but there is still a market for hand-painted portraits or traditionally made works like Guillermo del Toro’s Pinocchio. Or perhaps regulators will come to view some AI capabilities as more akin to plagiarism than generation, and limit their use.
But in the here and now, the tension between creators who want access to more powerful tools and consumers who resent seeing years of work by dozens of experts poorly replicated in a day highlights a key challenge for AI-generated entertainment.
While many advocates claim that an AI-enhanced future for content creation will allow artists to focus on the “what” and “why” while leaving the “how” to machines, the truth is that in many forms of art, the “how” is an integral part of the appeal.
To use the AI rotoscoping anime as an example, the idea was to present the filmed footage in a way that stylistically resembled Japanese animation. But while the AI process more or less achieved this, the end result lacks many of the features that come with authentic anime production.
In most anime, including Vampire Hunter D, characters expressively shapeshift, are rendered in entirely different styles depending on the situation, or move at different animation speeds that add texture to the story. Employing these techniques correctly would require an AI model to know not only what anime looks like, but why it looks the way it does.
And you see the same tension across the spectrum of generative models for speech, text, images, and sound. These models feed on the results of human creativity and skill, and they are becoming adept at replicating those results. But the jury is out on whether they could ever replicate the thought processes, theories, skills, and imagination behind them.