Normally on my blog I write about things I enjoy, but today is different: I’m addressing this post primarily to Nihon Falcom, creators of the Trails and Ys series, and also to other Falcom, RPG, and video game fans who can help spread the message. For the past 15 years I’ve been a huge fan of Falcom’s works, and I’ve written and talked about them extensively on my blog, on social media, on forums (where I’ve made numerous threads), and in person with friends and family. I’ve played the entire Trails and Ys series, and I’ve picked up other games as well, like Tokyo Xanadu and Zwei. Many of their games are among my all-time favorites and have topped my Game of the Year lists, with the original Trails in the Sky in particular being my favorite RPG and my third favorite game of all time, behind Super Mario 64 and Super Metroid. Falcom games mean the world to me and inspire me in so many ways, so it came as crushing news to hear that Falcom is now actively using Generative AI in their company. On my own end, I want to do everything possible to convince Falcom and all 69 of its employees that this is the wrong decision for the company, and that they should course correct immediately by eliminating all Generative AI use at Falcom before too much damage is done to the games they make. As part of this, I decided to write this blog post and share it on social media in the hopes that it can get enough attention to ultimately reach Falcom directly. We are all connected on the same internet, so with the help of others who feel similarly I think this is possible, and I hope my message here can transcend the language barriers between us. In prior years, talk of Generative AI at Falcom was limited to its potential use in speeding up localizations, which is itself a very serious issue that could greatly damage their quality and thus the company’s international reputation and sales.
Falcom fans in general are used to high-quality translations thanks to the hard work of XSeed, NISA, and fan translation groups like The Geofront, which bring the scripts to life and make them so enjoyable to read. A Generative AI translation, which can never understand greater context, would inevitably damage those scripts. The issue today, however, and what sparked this blog post, is that at their recent investors meeting on December 18, 2025, president Toshihiro Kondo responded to a question about how the company is actively utilizing Generative AI by sharing that it is being used to create image idea boards, to brainstorm ideas, to cut down on library research, and to check their scripts for typos and grammatical errors. While he does say upfront that they will be cautious about using it in their games due to legal concerns, let’s be honest: all of this ultimately feeds into the final games, and worryingly it starts at the idea stage, which most directly shapes the final script and the core of the game. Before I go further, I want to discuss the problems with Generative AI in general and my stance on it. When I talk about Generative AI, I’m specifically discussing the technology called Large Language Models, like ChatGPT and Claude, which take a prompt from a user and produce an output by guessing what the answer is one word, pixel, etc. at a time. The problems with Large Language Models are numerous and deeply fundamental to how and what they produce. Large Language Models are trained on art and data stolen from artists and other people, far more often than not without their consent, with the end goal in art in particular being to undercut artists and take away their jobs. The output can never be original, as the model can only regurgitate old ideas: it simply guesses what the next likely word, pixel, etc. is when generating an answer.
They thus always lack the context that is crucial to understanding how a real person would communicate ideas and facts. Every time they are used, Large Language Models are essentially a magic trick: they put out something that can look varying degrees of correct, they are programmed to do so with confidence, and they readily agree with whatever you ask of them, even if you knowingly give them a false prompt. The creators of Large Language Models promise the world, but because the technology is actually only made to do one thing (take a prompt from a user and create an answer by guessing individual words at a time), it cannot and will never do what its creators say it can. LLMs do not currently think and will never be able to think, as they are not designed to do that. They are often wrong and “hallucinate” ideas and information that don’t exist, because they were never designed to actually synthesize and present information, just the illusion of information delivered with confidence. In addition to being highly unethical and destructive to the environment, due to the absurd amount of power needed to run them, these models are dangerous in the sense that they will lie to you constantly and do so in a way that seems supportive. You don’t have to take my word for it, either: pick a subject you are knowledgeable in and ask questions about it, or repeatedly tell the model that something true is false, and watch it fall apart in front of you. Why would you want to use a technology that regularly lies to you? What do you think that does to you as both a person and an artist?
I don’t use Generative AI LLMs like ChatGPT myself, and I never will, but I of course encounter them and their output regularly, thanks to the companies cramming them wherever they can in an attempt to normalize them, and the people eager to share what they “made.” Take Google, for example, which regularly shows an AI-generated answer above many search results. Since these were introduced, I’ve seen laughable claims, like that someone hundreds of years ago named John Backflip invented the backflip and had a rival named William Frontflip. I’ve seen outright lies that look like truths because they pretend to cite legitimate sources, but following the citation back to the source reveals no mention of what the AI claimed. Google Images has become flooded with AI junk too. When you search for character fanart, for example, you can spot the AI images right away: too many fingers on the hands, lighting that is off, and, increasingly common lately, people with multiple heads. Multiple heads have even extended to “real world” Google AI ads, which look disturbing. Speaking of real-world applications, an increasingly common use of AI is to create deepfakes and other false-reality images and videos, which is yet another association that companies and people who use LLMs will be linked to. If there’s a running theme here, it’s that AI produces slop, which is embarrassing to look at and be associated with. When it comes down to art, I don’t want to read something people couldn’t be bothered to write, and I don’t want to see something people couldn’t be bothered to draw. In Falcom’s case, you may be thinking: well, it’s just for a few preparation tasks, what’s the harm? I mentioned it when I introduced the problem, but Falcom is essentially poisoning the well for their games at the idea stage if they are using Generative AI for brainstorming and to skip research.
These are the ideas that will most impact everything that follows, and you cannot come up with original ideas from Generative AI, which can only regurgitate what is already covered in its unethically sourced datasets. Even if you are just using it as a companion to bounce ideas off of, you have picked a technology that will readily agree with you, which makes it more likely that bad ideas in need of more work will be considered good enough. If you fudge the research stage, you are shortchanging yourself on actually understanding what you set out to research, and you are greatly increasing the chances that false, made-up information will make its way into your productions. More than anything else, using Generative AI is an embarrassing betrayal of yourself, of the art you make, and of your fans. Until now, everything Falcom has made came from real people putting in the hard work to come up with ideas and bring them to life. The Trails series specifically is an absolutely stunning human achievement: thirteen directly and heavily connected games, not to mention the spinoffs, adaptations, and additional media, that juggle so many characters and ideas and create a world that is constantly becoming bigger and more fleshed out. Fans who have been thorough, talking to every NPC and completing all of the quests, have almost assuredly spent well over 1,000 hours in Zemuria (more than 40 days of their lives, provided they didn’t sleep or eat!) and have been excited to keep going. Even if Falcom course corrects now, which they absolutely should, an element of doubt has suddenly been introduced into the whole grand saga. Was that odd sentence here and there intentional, or did an AI touch-up introduce ideas that weren’t supposed to be there? What if a character and their story wasn’t actually authored by the people at Falcom? What if the end of the Trails saga was heavily altered by a robot?
We can no longer treat your games as simply the product of your hard work; we now have to view them with constant skepticism, and that just sucks. I’m writing this blog post because I’m genuinely upset and disappointed that Falcom ever thought any of this was a good idea, and because I genuinely want to encourage them to get back on the right track. Falcom, you’ve never needed Generative AI to accomplish everything you have in the past, and you absolutely do not need it now or ever in the future. Your games are occasionally messy, and yet there is an endlessly compelling and inspiring core that drives them. I mentioned it at the start, but Falcom, your games mean so much to me that I’m constantly compelled to share what makes them so special at any opportunity that makes sense. Even with this current mess you’ve made, I still want people to play the Trails games you’ve released already, because I want them to experience something so incredible and inspiring. I want them to fall in love with your games so they can be mad right alongside me that you ever thought you needed to embarrass yourself by using Generative AI to keep making them. You have created so many wonderful characters and a world so rich and vast, unlike anything else, where characters and ideas constantly grow and evolve, and until now it was all done with so much careful thought. I loved reading, for example, about your thinking in developing Calvard as a country of immigrants, and your hope that people would take what they learned there and apply it to the real world we live in. What you’ve accomplished in your games inspires me regularly to write not just blog posts, but also stories and characters of my own. When you say you want to take a shortcut here, it’s frustrating, because it’s like: darn, do you even know what you’ve made and what you’ve accomplished so far? Do you not believe in yourselves?
I hope that if you read this, it encourages you to break away from Generative AI on the spot. You’ve inspired so many fans, and I’m writing this blog post and letter of sorts as one of those fans, to hopefully inspire you to believe in yourselves again. It would be so much easier on my end to just say screw Falcom for using Generative AI and stop playing your games on the spot, but I believe you all are worth it, and that what you have done and what you can still do matters. Falcom, please stop using Generative AI in your company.

Sincerely, Justin Mikos