Generative AI is shaping up to be one of the most divisive technological developments of the last few years. What was once celebrated as revolutionary technology is now scrutinized through increasingly critical lenses. Enter inZOI, the ambitious life simulator from South Korean developer Krafton that promises to redefine player creativity and customization. On paper, it's everything a modern life sim should be—visually stunning, deeply customizable, and powered by cutting-edge technology. Yet this promising title has become an unexpected flashpoint in the ongoing debate about AI ethics. This controversy exemplifies tensions facing not just gaming but creative industries broadly, from music to film to publishing, all grappling with similar questions about AI implementation, ethics, and creator rights.
What is inZOI and how does it use AI?
In inZOI, you oversee the lives of simulated people known as ‘Zois,’ controlling every facet of their existence, from the clothes they wear to the homes they live in. While previous life sims have often had a stylized or even cartoony vibe, inZOI promises photorealistic graphics and a high level of customizability. inZOI also includes a number of AI tools. You can have the game create a custom texture based on your prompt, or you can take a picture of a real-world object and have inZOI turn it into an in-game item.
Although this use of AI is a novelty in the life sim genre, Krafton is known for enthusiastically embracing AI. They’ve dedicated an entire section of their website to their use of the technology, explaining that “[o]ur goal is to innovate the paradigm of game production, publishing, and operation.” And it’s not just inZOI. Other AI-related projects of theirs include “online virtual friends” and “live & interactive virtual influencer.” Krafton is also working hard to develop its own AI tools, from vision and animation technology to language models.
Why is inZOI’s use of AI controversial?
The inclusion of these AI tools hasn’t gone down well with some players. In an article for The Gamer, Tessa Kaur worries that Krafton’s AI models might have been trained on copyrighted material without the original creators’ consent. She also fears that the developers’ use of tools like Midjourney during the creative process is taking work away from human artists. Others have raised concerns about the environmental impact of AI. These concerns have led to calls for a boycott of inZOI.
How Krafton responded to AI ethics concerns
In Krafton’s defense, they’ve been transparent about their use of AI. Their website claims that they “strive to establish a procedure that will allow us to carefully examine potential AI ethics issues (such as hate speech or privacy issues).” To that end, they’ve created a Krafton AI Ethics Committee with a mandate to “facilitate ongoing discussion and debate on AI ethics issues.”
The developers have also taken to Discord to clarify how their AI tools are trained. Community team member ‘Suri’ wrote that “[a]ll AI features within InZOI utilize proprietary models developed by Krafton and are trained using solely company-owned and copyright issue-free assets and data. In addition, inZOI's AI capabilities are built into the client as on-device solutions and therefore do not make communications online with external servers.”
Why the concerns about AI ethics extend beyond gaming
On paper, Krafton has done everything right. Not only have they tried to ensure their tools are developed in an ethical manner, but they’ve also been transparent about their use of AI as a company. However, this isn’t going to be enough for some people. A segment of the population is clearly uncomfortable with generative AI and would prefer to avoid it if at all possible. While some might characterize this as technophobic hysteria, it would be a mistake to dismiss their concerns out of hand.
Krafton’s ethical approach is far from universal. Many tools have been trained on copyrighted material. Their developers have argued that this is justified under the fair-use doctrine, but many creators still feel aggrieved. They worry that the appropriation of their work will ultimately cost them their jobs, and this has led some of them to fight back. Generative AI is also an energy hog that can drive up carbon dioxide emissions. Under the circumstances, it’s not hard to see why some people view generative AI as a line they’re unwilling to cross.
While gaming's AI ethics debate has its unique aspects, Krafton isn't alone in navigating these challenging waters. Other creative industries face remarkably similar ethical questions in their own AI implementations, offering valuable comparative insights for understanding inZOI's reception.
A cost-benefit analysis
Even if you aren’t a game developer, the controversy over inZOI’s use of AI is still worth paying attention to. In a world where generative AI is becoming omnipresent, it can feel like you have to use it. But given how divisive AI has become, any usage of it is going to be controversial. It might make sense to weather the storm if AI is essential to your project. But if you’re just using it like the ornamental sprigs of parsley that were once de rigueur in the restaurant trade, it can be prudent to step back and reconsider. Use AI only if the benefits outweigh the potential backlash.
Cross-industry lessons: How publishers navigate similar AI challenges
Like Krafton, creators in other industries are having to navigate the complex ethical challenges posed by AI as they strive to balance technological potential against the rights of creators.
Legal framework and copyright challenges
Recent legal precedents have challenged the industry assumption that training AI on copyrighted materials constitutes fair use. This has direct implications for both game developers and publishers who need to ensure their AI implementations don't infringe existing rights and may explain why Krafton has taken care to clarify that their AI features are "trained using solely company-owned and copyright issue-free assets."
Audience skepticism and trust
Research shows significant audience skepticism toward AI-generated content, with nearly half of Americans saying they don't want news from generative AI. This mirrors the consumer backlash against inZOI, suggesting that creative industries across the board face similar trust challenges when implementing AI technologies.
Human-AI collaboration models
Publishers are exploring "AI-assisted storytelling" where machine-generated content serves as a foundation for human refinement, recognizing that while AI cannot replicate human emotion, it can streamline workflows. This balanced approach resembles Krafton's positioning of inZOI's AI features as tools that enhance rather than replace human creativity.
Implications for content creators
For both publishers and game developers like Krafton, several key considerations emerge:
- Transparency builds trust: Just as Krafton has been open about their AI implementations, publishers should prioritize clear disclosure about AI use in content creation.
- Ethical training data: Following legal precedents like the Thomson Reuters case, content creators must carefully evaluate their training data sources to avoid potential infringement.
- Balanced human-AI roles: The distinction between "using AI as a tool to assist in creation" versus "using AI as a stand-in for human creativity" remains crucial for both industries.
- Anticipate regulation: As existing laws struggle to address digital replicas and AI-generated content, both publishers and game developers should prepare for evolving regulatory frameworks.
The inZOI controversy demonstrates that even well-intentioned AI implementations can generate significant pushback. For content creators across industries, achieving the right balance between innovation and ethical implementation remains essential, not merely to avoid legal complications, but to maintain audience trust and support the broader creative ecosystem.
Conclusion
As we've seen with inZOI, even well-intentioned implementations of AI technology can create controversies that must be carefully navigated. While it’s definitely a good idea to follow Krafton’s lead and be as transparent as possible regarding your use of AI, transparency won’t mollify everyone.
For creators considering AI implementation, Krafton's experience highlights the need for a more nuanced approach. You should assess whether AI truly adds substantive value to your product or service, rather than simply incorporating it because it's cutting-edge technology. If it’s not really adding value, it’s a battle best avoided.