The AI wave reached anime shores a long time ago – will anime ever be the same again? Join Chris and Nicky this week in their discourse about anime and AI.
Disclaimer: The views and opinions expressed by the participants in this chatlog are not the views of Anime News Network. Spoiler warning for discussion of the series ahead.
Correction: This column previously included a clip of Discotek's release of Street Fighter II with the discussion of AstroRes, implying the company used the AstroRes process for the film. Street Fighter II is a new 4K scan and is not an upscale, nor was AI used for Discotek's release of the Street Fighter II anime. Similarly, while Discotek has experimented with turning HD into UHD, the company determined it was not worth the effort and has not released any commercial product or clips using this method.
Nicky, I know a lot of people these days have consternation over their jobs being taken over by machines. It used to be the subject of speculative fiction, but now you can't go a few virtual feet without running into artists or writers with legitimate concerns about their roles being farmed out to so-called AI. Thankfully, we've perfected the fine art of posting anime screencaps and making stupid jokes, something no machine could replicate, no matter how sophisticated.
It makes you wonder how it’s going for everyone else out there, though.
I know that no AI could ever be as “plugged-in” as we are, but the rate at which Machine Learning Models have been catching on is quite harrowing. While these models have been in development for years, this has been the year that every executive and their grandmother has been rushing to adopt technological assistants into their businesses and homes. I can’t go anywhere without seeing ChatGPT this or Midjourney that! To some, this is a jump towards a more technologically advanced future, but to me, as an artist and writer, the rapid pace at which AI is being incorporated into our everyday lives feels like a dystopic invasion.
One of the biggest issues with AI is more than just its output; it's its input. Many of the most prominent AI models rely on vast amounts of training data, and many of the largest datasets consist of text and images scraped from social media and other corners of the internet where professionals and hobbyists post their work. That means much of the data used to create these AI systems has been taken from users without their permission.
AI has been used to assist intentional art theft too, with some people taking artists' existing art or sketches and feeding them to an AI. That's what happened to one Korean-language Genshin Impact artist (@Haruno_intro) when someone posted an AI-generated version of their in-progress Raiden Shogun piece to social media before the final work was completed, in an attempt to claim that the AI version was the "original." Below is the AI (left) vs. the completed art (right).
So now we also have an online artist who is afraid to post their sketches and stream their work.
For reasons like this, you've seen a rise in programs artists can use to prevent their work from being scraped for datasets, with Glaze seeming to be the most prominent.
One of those cases where it’s neat that people were able to rise to the challenge of coming up with solutions for this sort of thing, but also frustrating that we wound up in such a situation where it was even necessary.
However, not everyone is on the side of online artists, with users and even larger entities, such as individual social media sites and art hubs, having polarized takes on whether AI art should be allowed. DeviantArt welcomed our machine overlords with open arms, much to the dismay of more than a few artists. A select few are going all in on the Andersen v. Stability AI Ltd. lawsuit claiming copyright infringement. They've also created a blog detailing their case.
Compared to places like DeviantArt, Japanese sites focused on user-uploaded creations, including pixiv and DLsite, seemed to be coming down on the side of banning most machine-generated material. These bans were instituted in May. However, they were labeled "temporary" at the time and don't seem to have stuck. As anyone who browses those sites regularly can tell you, their listings continue to be lousy with AI-generated pictures and image sets, with their "creators" making money off selling them.
It's incredibly frustrating, as someone who enjoys browsing art as an expression of the people who made it, to see it supplanted by material created simply by skimming off that work, put together with no intent to express anything other than a description of whatever's "Trending on ArtStation."
And of course, there's nothing at the moment protecting the thousands of images from overseas artists that have already been scraped, fed into these models, and packaged as a subscription service that automatically produces "anime"-style artwork, when there has always been a prominent, never-dealt-with problem of people's art being reposted without permission!
The time-honored artist’s request not to repost. So many of us can respect that small favor, but too many others apparently could use the sign tapped a few more times.
The sheer ease of monetizing this model means that AI-generated content would always be attractive to these kinds of hacks and hucksters. But what does it mean when more theoretically above-board, professional institutions see the potential for profit borne of lessening that pesky, costly human input, and start experimenting with this technology in things like our beloved anime medium?
Hmm, it's pretty complicated! While those online and in the hobby world see AI as a huge detriment, companies and corporations, including overworked and understaffed anime studios, see AI as a worthy "investment." Wit Studio produced The Dog and the Boy short with Netflix's "Creator's Base" and worked closely with AI training schools to produce the three-minute short's background art.
It's not a bad short, a story between a robot dog and his human, but the credits note that there were also humans doing layout and touch-up, and the only credit they receive is "+Human."
It comes off only a little glib to see them throw up “hand-drawn” layouts as part of creating these backgrounds while neglecting to credit whose hands those were.
Also, the song used in the short has vocals created using a Vocaloid-style sound bank because they were going for a whole theme here.
The article above notes many other emerging AI techs, such as automatic coloring and in-betweens. Technically, The Dog and The Boy isn't even the first anime to use AI. This brings up a few more problems: A) the term "AI" is very broad and can also refer to a range of automated tech that might've already existed, B) that breadth makes it difficult for laymen to understand what any given AI tech actually does, and C) is it worth pouring all this money into the technology compared to just paying or teaching artists to do the same thing without the fuss?
The beauty of something like the synthetic diva Hatsune Miku is the clear effort that producers, like my guy DECO*27, pour into tuning her songs. (Not to mention the wonderful animated music videos accompanying many of them.) "Blue Planet" was released at the end of August to celebrate the virtual idol's 16th birthday.
It was interesting to see the visceral reaction to The Dog & The Boy when it debuted, likely due to the growing public knowledge of this kind of technology and how companies like Netflix might leverage it long-term. Comparatively, just two years earlier, Studio Orange used AI-generated backgrounds for the OP of their second season of BEASTARS, and the reaction was mostly a shrugging “Yeah, that’s kinda neat.”
Of course, that's the point. In 2021, AI and ML models were just seen as interesting artists' toys that could generate stuff to be incorporated by more "traditional" creators or to produce vaguely funny "We forced a computer to watch 1000 hours of The Office and generate us a script" articles on entertainment sites.
Yet, it's less novel when something like the latest Marvel miniseries Secret Invasion releases its AI-generated TV opening amid strikes. It's not funny if you've followed the struggle of animators, writers, and VFX artists fighting against studios to make a decent living. It's harrowing once you've learned how the world would love nothing more than to treat creators as disposable machines. Many now see AI as a tool deployed against human workers, myself included.
The people at the top hoping to use AI as a substitute for human effort are running into negative public perception. The other problem is that AI churns out a poor product. Earlier this year, Atlus reissued the anime-affiliated RPG Persona 3 Portable on modern consoles, with many of the game's graphical elements farmed out to a company that used free AI upscaling software to bring them up to HD. The results, especially on the backgrounds, were…less than stellar.
My other issue is that many proclaim an AI future before its time…They overestimate what it can do. Machine Learning models still struggle with consistency; even the more "intelligent" models can exhibit biases, hallucinations, or drift. This goes back to what I meant before: AI is an inclusive term, with most people's only reference being science fiction that anthropomorphizes machines as sentient (including by far the best episodes of Magia Record).
It’s like those bros who will interact with ChatGPT for a few minutes and start panicking that the fancy autocomplete is “becoming self-aware.” It’s that kind of gun-jumping that saw Square Enix, who previously went in on earlier tech-speculation sure-thing NFTs, attempt a remake of the formative early visual novel The Portopia Serial Murder Case as an “AI Tech Demo.” Said AI technology was consigned to a pretty reasonable use-case, with a conversation simulation system that was supposed to let you navigate a text-based adventure game with less rigid inputs. Except this didn’t even work, and the Portopia remake wound up even more obtuse to interact with than Wizardry.
AI is, of course, only a tool, so it’s not evil in itself, and I can’t say it doesn’t have the potential to be helpful or good. However, it depends on a lot of factors. Like the model, the method, and the usage. Each instance is on a case-by-case basis. However, even with all that, there’s still so much work to be done before the adoption of AI could be considered “efficient” with how little control there is over some of its output, compared to “dumber” more traditional tools. When I see AI being used, my base reaction is to think that the company wanted to cut corners.
That's a reasonable assumption a lot of people make. There's also the question of the exploitation of creators by the higher-ups using this technology. One of the big sticking points of the ongoing SAG-AFTRA strike is studios trying to get performers to sign away their rights so their images and performances can be scraped for possible future synthesis. It's a ghoulish concept used as a punchline on BoJack Horseman. But it's also the sort of thing that's already an ongoing issue, with amateur AI tech users scraping performances of anime voice actors like Erica Lindbeck and Richard Epcar to be used without their consent in unauthorized mods, song covers, and god knows what else.
However, if I wanted to give an example of how AI could be beneficial: following our discussion on Blu-rays, preservation, and restoration, retro fans' favorite boutique licensor Discotek utilizes AstroRes to digitally clean up analog video damage and blurriness and upscale SD video to a modern HD signal. This is useful since digital animation is challenging to produce in any resolution higher than 1080p due to how massive the file sizes get.
It’s a far cry from that Persona 3 upres job, that’s for sure. And it is an excellent example of how people with a real passion for art can use this kind of technology.
This particularly favors digipaint-era titles and sources where the original materials have become lost or unavailable. However, Discotek has noted the results aren't as good as scans of the original film in the case of traditional cel works such as Project A-ko.
There’s always that measure of uncertainty when companies talk about using this kind of tech. Going back to what I mentioned about synthesizing actor performances, Bushiroad recently had my beloved BanG Dream! franchise collaborate with CeVIO AI, a singing synthesizer company, to produce voice banks based on a couple of their performers.
I snarked earlier about The Dog & The Boy's use of one, but I think banks like Vocaloid are nifty tools musicians can use to produce amazing work. The question is whether these CeVIO programs are traditional voice banks, contracted and recorded by their performers, that use AI and Machine Learning to smooth out the delivery a bit, or whether they were scraped and synthesized from the actresses' prior work. How much input did they have on their performances being used that way? It's unclear from CeVIO and Bushiroad's buzzwordy statements, but I hate to think that future performances of Kasumi might not be AIMI but a soulless simulacrum of her.
It depends. Synthesized voicebanks like Vocaloids and UTAU predate learning models, and there HAVE been cases of real singers serving as a bank's basis with permission, or of performers lending their voice as scratch before launching their real singing careers. Similarly, with diffusion models, there are cases where real people feed in their own footage and use diffusion as a sort of filter, almost like traditional rotoscoping. See the process behind the viral "Anime Rock, Paper, Scissors."
Oh lord, that thing. That’s earned the record for the most quotation marks needed to be used around something labeled “Anime.”
However, they also say they heavily sample images from the movie Vampire Hunter D: Bloodlust, which feels a little more dubious regarding copyright. This leads to our final question: Is AI art even copyright-compliant or copyrightable?
That would be the ultimate query, given how many companies want to use it to output their material! Beyond the apparent technical limitations and just plain ethical issues, generated art that couldn’t even be properly packaged and sold by these executives might synthesize the nail in its coffin.
Aw dang, would you look at that:
This is based on U.S. law, by the way, not Japanese law. The policy was issued in response to a case where a copyright claim for an image was filed crediting only what scientist Dr. Stephen Thaler calls "The Creativity Machine," and it states that humans must be involved for an AI work to be copyrighted, just as you couldn't technically copyright an image solely produced by your cat or dog.
And I'm sure plenty of those outputting AI art will contest these definitions, arguing that typing in some prompts amounts to enough human involvement in the artistic process. But it is a start. Notably, on Japan's side of things, the Agency for Cultural Affairs did state back in May that AI-generated art could be held liable for copyright infringement.
However, the Japanese government would reverse that decision a month later to be more in line with the European Union. Governments around the globe are pondering what to do regarding AI, and being out of step would hamper future collaboration. The EU guidelines do stipulate that companies must disclose copyrighted materials used in training data, though. While disappointing, there's hope that rules and regulations could forge a more ethical AI future.
The fact that Japan’s ACA could do an about-face on things quickly speaks to what a moving target things are in this formative new future we find ourselves in. Our jobs as funny internet commentators may be ones that an AI couldn’t do, but it’s also a position where we don’t have all the answers. The anime we watch, like Carole and Tuesday and Vivy: Fluorite Eye’s Song, might muse fantastically on the various possibilities of AI-created art in the future, but those stories are the results of actual human creators and performers. Just like with the data this AI content is initially trained on, you can’t get anywhere without a person at the start of it all.
Even with various malicious uses of AI, it's still the result of people choosing to act against other human beings. A tool should never come before the rights, livelihood, and well-being of living, breathing people. I can only dream that all the money being put into pushing us toward an "AI future" went toward human welfare instead. When people say they are anti-AI, they're not anti-technology or anti-convenience; they're promoting solidarity and respect for human hands.
That’s the point I always return to when engaging with these subjects. I’ve seen animators anecdotally speak well of programs like Dwango‘s anime AI assistant that can streamline the process of in-between frames. And I have to admit it’s a bit chuckle-worthy to see Eiichiro Oda play around with ChatGPT and ask for its input on the next One Piece chapter, even as he didn’t seem too impressed with it.
There is potential value in these tools, but even in the most hopeful speculative sci-fi anime, the resolution usually comes down to us working with the AIs, not having them wholly replace us.