Krafton's "copyright-free" AI claim falls apart


South Korean game developer Krafton promised ethical AI in their life simulator inZOI, but players discovered that its AI tools were built on controversially trained models. The backlash offers crucial lessons for any creator considering generative AI implementation.


In Politico’s series on the impact of AI on democracy, one of the pieces looked at how tech companies are working hard to make you believe that they’re doing the right thing when it comes to generative AI. But industry spin and an avalanche of responsible-sounding buzzwords can mask a more complicated reality. Consider inZOI, an ambitious life simulator from South Korean developer Krafton that promises to redefine player creativity and customization. This once-promising title became an unexpected flashpoint in the ongoing debate about AI ethics due to its inclusion of tools based on generative AI. While Krafton attempted to quell customer discontent by emphasizing their ethical approach to AI, things began to unravel once players started poking around under the hood.

Companies should reconsider if they're using AI "like the ornamental sprigs of parsley that were once de rigueur in the restaurant trade"

What is inZOI and how does it use AI?

In inZOI, you manage the lives of simulated people known as ‘Zois,’ controlling every facet of their existence, from the clothes they wear to the homes they live in. While previous life sims often have a stylized or even cartoony vibe, inZOI promises photorealistic graphics and a high level of customizability. It also includes a number of AI tools: you can have the game create a custom texture based on your prompt, or you can take a picture of a real-world object and have inZOI turn it into an in-game item.

Although this use of AI is a novelty in the life sim genre, Krafton is known for enthusiastically embracing AI. They’ve dedicated an entire section of their website to their use of the technology, explaining that “[o]ur goal is to innovate the paradigm of game production, publishing, and operation.” And it’s not just inZOI. Other AI-related projects of theirs include “online virtual friends” and a “live & interactive virtual influencer.” Krafton is also working hard to develop its own AI tools, from vision and animation technology to language models.

"95% of generative AI pilots are failing," according to an MIT study

Why is inZOI’s use of AI controversial?

The inclusion of these AI tools hasn’t gone down well with some players. In an article for The Gamer, Tessa Kaur worried that Krafton’s AIs might have been trained on copyrighted material without the original creators’ consent. She also fears that the developers’ use of tools like Midjourney during the creative process is taking work away from human artists. Others have raised concerns about the environmental impact of AI. These concerns have led to calls for a boycott of inZOI.

How Krafton responded to AI ethics concerns 

In Krafton’s defense, they’ve been transparent about their use of AI. Their website claims that they “strive to establish a procedure that will allow us to carefully examine potential AI ethics issues (such as hate speech or privacy issues).” To that end, they’ve created a Krafton AI Ethics Committee with a mandate to “facilitate ongoing discussion and debate on AI ethics issues.” 

The developers took to Discord to clarify how their AI tools are trained. Community team member ‘Suri’ wrote that “[a]ll AI features within InZOI utilize proprietary models developed by Krafton and are trained using solely company-owned and copyright issue-free assets and data. In addition, inZOI's AI capabilities are built into the client as on-device solutions and therefore do not make communications online with external servers.”

The other shoe drops

Once inZOI was on players’ computers, they could examine the game files. A Redditor called ryakr claimed that one of the game’s AI tools is essentially powered by a fine-tuned version of Stable Diffusion. “Looking at both files in a hexeditor literally shows the same gating and neuron layout just with different weights,” they said. So far, Krafton does not appear to have responded to these revelations.
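ryakr’s hex-editor comparison boils down to a simple idea: two model files that define the same tensor names and shapes but contain different weights are almost certainly the same architecture with different training. As a rough illustration of that check (not an analysis of Krafton’s actual files), here is a sketch against the safetensors format commonly used to distribute Stable Diffusion checkpoints; the helper names are hypothetical.

```python
import json
import struct

def make_safetensors(tensors):
    """Build a minimal safetensors blob: name -> (dtype, shape, raw bytes)."""
    header, data, offset = {}, b"", 0
    for name, (dtype, shape, raw) in tensors.items():
        header[name] = {"dtype": dtype, "shape": shape,
                        "data_offsets": [offset, offset + len(raw)]}
        offset += len(raw)
        data += raw
    blob = json.dumps(header).encode("utf-8")
    # safetensors layout: 8-byte little-endian header length, JSON header, weights
    return struct.pack("<Q", len(blob)) + blob + data

def read_header(blob):
    """Parse the JSON header describing every tensor in the file."""
    n = struct.unpack("<Q", blob[:8])[0]
    return json.loads(blob[8:8 + n].decode("utf-8"))

def same_architecture(blob_a, blob_b):
    """True if both files share tensor names and shapes (weights may differ)."""
    layout = lambda blob: {k: tuple(v["shape"])
                           for k, v in read_header(blob).items()}
    return layout(blob_a) == layout(blob_b)

# Two toy "models": identical layer layout, different weight bytes.
base = make_safetensors({"layer.0.weight": ("F32", [2, 2], bytes(16))})
tuned = make_safetensors({"layer.0.weight": ("F32", [2, 2], bytes(range(16)))})
print(same_architecture(base, tuned))  # → True
```

The same header comparison works on real checkpoint files read from disk, since the weights themselves are irrelevant to the layout check, which is essentially what the hex-editor observation amounted to.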

Even if Krafton only used ethically sourced assets to fine-tune their version of Stable Diffusion, the same cannot be said for the core model. It was trained on the LAION dataset, which includes five billion images scraped from the internet without the consent (or even knowledge) of their creators. That is why Stability AI, the company behind Stable Diffusion, and others are facing a class-action lawsuit from artists aggrieved by the use of their work to train these models.

Although Krafton’s claims may technically be true, this approach will strike many people as an exercise in hairsplitting reminiscent of Bill Clinton quibbling over the meaning of the word "is." The fact that AI itself is a black box whose inner workings are inherently inscrutable only muddies the waters further. At any rate, Krafton doesn’t need more bad publicity. Although inZOI is currently in early access, it’s being portrayed as something of a flop. While the game’s woes go beyond the use of generative AI, this controversy certainly hasn’t helped it, either. 

The fact that AI itself is a black box whose inner workings are inherently inscrutable only muddies the waters further

Why the concerns about AI ethics extend beyond gaming

A cost-benefit analysis

Even if you aren’t a game developer, the controversy over inZOI’s use of AI is still worth paying attention to. In a world where generative AI is becoming omnipresent, it can feel like you have to use it. But given how divisive AI has become, any usage of it is going to be controversial. It might make sense to weather the storm if AI is essential to your project. But if you’re just using it like the ornamental sprigs of parsley that were once de rigueur in the restaurant trade, it can be prudent to step back, reconsider your approach, and only use AI if the benefits outweigh the potential backlash.

That doesn’t mean you have to avoid it like the plague, though. For example, the Washington Post has developed a tool called ‘Bandito’ that helps them automate A/B testing, but the different options are still prepared by humans. Human oversight should be at the heart of any AI strategy. 


Cross-industry lessons: How publishers navigate similar AI challenges

Like Krafton, creators in other industries are having to navigate the complex ethical challenges posed by AI as they strive to balance technological potential against the rights of creators.

Legal framework and copyright challenges

Recent legal precedents, such as the ruling in the Thomson Reuters case, have challenged the industry assumption that training AI on copyrighted materials constitutes fair use. Similarly, Anthropic is facing a class-action lawsuit from authors over its use of pirated books to train its AI model Claude. Cases like these have direct implications for both game developers and publishers who need to ensure their AI implementations don't infringe existing rights. Even if you tweak an existing model and train it on ethically sourced material, you can still find yourself in hot water if the underlying model was trained in morally questionable ways.

Audience skepticism and trust

Research shows significant audience skepticism toward AI-generated content, with nearly half of Americans saying they don't want news from generative AI. This mirrors the consumer backlash against inZOI, suggesting that creative industries across the board face similar trust challenges when implementing AI technologies. Corporate hairsplitting doesn’t help. Even the business world may be souring on AI. OpenAI’s Sam Altman recently made headlines when he warned that AI could be in a bubble, while a study from MIT suggested that 95% of generative AI pilots are failing.

Human-AI collaboration models

Publishers are exploring "AI-assisted storytelling," where machine-generated content serves as a foundation for human refinement, recognizing that while AI cannot replicate human emotion, it can streamline workflows.

Implications for content creators

For both publishers and game developers like Krafton, several key considerations emerge:

  1. Transparency builds trust: Publishers should prioritize clear disclosure about AI use in content creation. But if you try to obfuscate, be prepared for a backlash.
  2. Ethical training data: Following legal precedents like the Thomson Reuters case, content creators must carefully evaluate their training data sources to avoid potential infringement.
  3. Balanced human-AI roles: The distinction between "using AI as a tool to assist in creation" versus "using AI as a stand-in for human creativity" remains crucial for both industries.
  4. Anticipate regulation: As existing laws struggle to address digital replicas and AI-generated content, both publishers and game developers should prepare for evolving regulatory frameworks.

The inZOI controversy demonstrates how generative AI is becoming a cultural flashpoint. For content creators across industries, ethical implementation remains essential. It’s not just about staving off lawsuits or boycotts; it’s about maintaining audience trust and supporting the broader creative ecosystem.

If it's not adding genuine value, don't even bother

Conclusion

As we've seen with inZOI, even well-intentioned implementations of AI technology can create controversies that must be carefully navigated. Krafton’s attempt to win kudos by stressing their ethical use of the technology has now been undermined by the revelation that the situation may be far more complicated than they let on. 

For creators considering AI implementation, Krafton's experience highlights the need for a more nuanced approach. You should assess whether AI truly adds substantive value to your product or service, rather than simply incorporating it because it's cutting-edge technology. If it’s not adding genuine value, don’t even bother. 
