In the space of a few short years, generative AI has gone from technological curiosity to ingrained part of our digital lives. But this development is far from uncontested. Krafton’s decision to include a suite of AI tools in its new life sim inZOI generated backlash from fans, even though the studio has arguably tried to develop those tools in an ethical manner. Given the hostility AI can provoke, it’s fair to ask: should we use AI at all?
This post isn't about taking sides. Instead, it offers a framework for creators, thinkers, and professionals to determine whether AI has a place in their work—and, if so, how to use it responsibly. From assessing ethical concerns to making informed decisions about when and how to integrate AI, the goal is simple: to approach this powerful tool with clarity and care. Because AI need not be an all-or-nothing proposition; sometimes, the best approach is one of balance.
Why is AI controversial?
Artificial intelligence has sparked intense debate, not just for its capabilities but for the ethical dilemmas it presents. While AI offers efficiency and innovation, its development and deployment raise concerns about fairness, consent, and the integrity of human creativity.
One major point of contention is data usage. AI models rely on vast datasets to learn patterns and generate content, but many of these datasets include copyrighted works, personal information, and proprietary materials—often scraped without explicit permission. This has led to lawsuits from artists, writers, and other creators who argue that AI companies are profiting from their work without compensation or credit.
In response, some artists have turned to data poisoning, a technique that subtly corrupts AI training data to prevent models from accurately replicating their styles. This form of digital resistance highlights the growing tension between technological advancement and intellectual property rights.
Beyond copyright concerns, AI training also introduces hidden risks. The datasets used to teach AI models can contain biases, misinformation, or even security vulnerabilities. If flawed data is incorporated into AI systems, it can lead to unreliable outputs, reinforce harmful stereotypes, or expose sensitive information. Additionally, AI models sometimes memorize data rather than simply learning patterns, which means they can inadvertently reproduce private or confidential details when prompted. This raises serious privacy concerns, especially in industries like healthcare and finance.
AI’s environmental impact is often overlooked, but its resource demands are significant. Training and running AI models require vast amounts of electricity, contributing to carbon emissions and straining power grids. Additionally, AI data centers consume large quantities of water for cooling, which can deplete local water supplies and disrupt ecosystems. The production of specialized hardware for AI also relies on rare earth minerals, leading to environmental degradation.
There *are* use cases for AI
With all these drawbacks, it can be tempting to conclude that AI should be avoided like the plague. But there are undoubtedly tasks for which generative AI is well suited, particularly those involving pattern recognition, content creation, and iterative refinement. These include:
- Writing and editing: Generative AI can draft blog posts, summaries, and translations, and it can refine existing text while maintaining stylistic consistency.
- Visual media: It can assist with creating original artwork and design mockups and with restoring photos.
- Coding: AI can draft functional scripts, debug errors, and suggest optimizations efficiently.
- Personalized recommendations: It can enhance playlist curation, search result refinement, and engagement strategies.
- Repurposing content: AI can adapt blog posts, articles, or reports into social media posts, presentations, or formatted documents to maximize accessibility and engagement.
Of course, while AI can do a lot, it’s not a silver bullet. As my colleague Jason Loch discovered, the much-vaunted ‘deep research’ capabilities that many AI tools have rolled out in recent months can struggle when it comes to hyper-specific information. This can cause problems if you’re asking the AI for information on a topic you don’t know much about, as you won’t be able to tell if the AI is relying on outdated information or simply hallucinating.
Getting the most out of AI also requires a degree of technical knowledge on your part. For example, you’ll need to know how to construct an effective prompt for the task at hand. You also need to be able to evaluate the AI’s output: even a capable model performing a task well within its abilities can still make mistakes. You can’t simply take what it gives you and hit ‘publish’ without a second thought. It’s not a good look if your audience can tell that you’re farming everything out to the AI.
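To make the prompting point concrete, here’s a minimal sketch of the difference between a vague prompt and a specific one. It assumes the OpenAI Python SDK purely for illustration; the same idea applies to whichever tool you actually use, and the model name is a placeholder.

```python
# A minimal sketch of prompt construction, assuming the OpenAI Python SDK.
# Any comparable tool works; the model name below is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A vague prompt leaves the model guessing about audience, length, and tone.
vague_prompt = "Write something about our new product."

# A specific prompt states the task, the audience, and the constraints,
# which also gives you concrete criteria for evaluating the output.
specific_prompt = (
    "Write a 150-word announcement for a project-management app aimed at "
    "small nonprofits. Use a friendly but professional tone, avoid "
    "superlatives, and end with a single call to action."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": specific_prompt}],
)

draft = response.choices[0].message.content
# Treat this as a draft: fact-check and edit it before anything goes live.
print(draft)
```

The particular SDK doesn’t matter; what matters is that a prompt spelling out audience, length, and constraints gives you something you can actually check the output against.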
Using AI doesn’t have to mean using AI for *everything*
It’s important to realize that using AI isn’t an all-or-nothing proposition. Even if you don’t use AI for research or writing, you could still use it to optimize post titles and refine content for social media. It’s okay to use AI for some things and not others.
Using AI ethically
Rather than treating ethical AI use as a binary decision, consider these questions as starting points for developing a more nuanced understanding of the AI tools you're using:
- Was the training data sourced responsibly? Did the developers obtain data with consent, proper licensing, and fair compensation where applicable?
- Does the AI respect intellectual property rights? Does it rely on copyrighted works without permission, or does it support ethical content use?
- How transparent are the developers about the AI's limitations? Do they disclose biases, risks, and ethical concerns in its use?
- Does the AI reinforce biases or misinformation? Was its dataset vetted to avoid reinforcing harmful stereotypes or unreliable information?
- How does the AI handle user data and privacy? Does it collect, store, or share personal data responsibly and in compliance with regulations?
- Is there accountability for misuse? Are there safeguards against harmful applications, such as fraud, deepfakes, or misinformation campaigns?
- How does the AI impact human labor and creativity? Does it empower creators and workers, or does it undermine fair compensation and job security?
- Are ethical guidelines or responsible AI policies in place? Do the developers follow industry standards and ethical frameworks for AI development?
- Can users understand and control how AI operates? Are there clear settings, disclosures, and options for limiting AI’s influence in decision-making?
You might think that none of this is your concern since you’re just the end user, but as the old adage goes, you’re known by the company you keep. If you’re using an AI tool that was developed unethically, you’ll be tarred by association.
When it comes to your own use of AI, here are some things to consider:
- Understand how the AI works: Learn about the tool’s capabilities, limitations, and how it generates content to ensure informed use.
- Verify the source of AI-generated content: Ensure that any AI-assisted work doesn’t rely on copyrighted or unethically sourced data.
- Use AI to support, not replace, human creativity: Treat AI as a tool for enhancement rather than a substitute for original thought.
- Disclose AI involvement when appropriate: Be transparent about when AI has been used in content creation, especially in professional or public-facing projects.
- Fact-check AI-generated information: AI can produce errors or misinformation—always cross-check sources for accuracy.
- Respect intellectual property rights: Avoid using AI-generated material that mimics or repurposes copyrighted works without permission.
- Avoid reinforcing bias or harmful narratives: Be mindful of how AI processes language and imagery, and adjust outputs to ensure fairness.
- Consider the environmental impact: Use AI efficiently and avoid unnecessary computations that contribute to excessive energy consumption (see the sketch after this list).
- Advocate for ethical AI development: Support responsible AI practices by engaging in discussions about transparency, fairness, and accountability.
- Use AI selectively and with purpose: Not every task requires AI—determine when its use is truly beneficial rather than relying on it by default.
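One small, concrete way to cut down on unnecessary computation, as mentioned in the environmental-impact point above, is to cache responses so that identical prompts aren’t sent to the model twice. This is only a sketch; `ask_model` is a hypothetical stand-in for whatever API or tool you actually call.

```python
# Sketch: cache model responses so identical prompts aren't recomputed.
# ask_model() is a hypothetical stand-in for a real API call.
from functools import lru_cache


@lru_cache(maxsize=256)
def ask_model(prompt: str) -> str:
    """Pretend to call a generative model; swap the body for a real API call."""
    return f"[model response to: {prompt!r}]"


# The first call does the (expensive) work; the repeat is served from the cache.
print(ask_model("Suggest three titles for a post about AI ethics."))
print(ask_model("Suggest three titles for a post about AI ethics."))
print(ask_model.cache_info())  # hits/misses, handy for confirming the cache is used
```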
Conclusion: Finding your AI balance
The generative AI landscape continues to evolve at a dizzying pace, leaving creators, professionals, and organizations to navigate complex ethical terrain. As we've seen, this isn't a simple question of whether to use AI or not—it's about finding your personal or organizational equilibrium in an increasingly AI-integrated world.
The most sustainable approach is one of thoughtful moderation. By identifying where AI genuinely enhances your work without compromising your values, you can harness its benefits while mitigating its drawbacks. Whether you're a creative professional concerned about authenticity, a business leader weighing efficiency against ethical considerations, or simply an individual exploring these tools, the key is intentional use.
Remember that your choices around AI don't exist in isolation. They reflect your values, impact others in your industry, and contribute to broader conversations about technology's role in society. By approaching AI with both critical thinking and an open mind—neither rejecting it outright nor embracing it uncritically—you help shape a future where technology serves human creativity and well-being rather than undermining it.
The question isn't whether AI belongs in our toolkit, but rather how we can use it responsibly, transparently, and in alignment with our deepest values. In doing so, we can ensure that as AI capabilities grow, they grow in directions that enhance rather than diminish what makes our work—and ourselves—uniquely human.
Further information
1. OECD AI Principles and Toolkit
- Link: https://oecd.ai
- What it offers: The OECD provides a comprehensive framework for trustworthy AI, including principles, policy observatory tools, and country-specific implementations.
2. UNESCO Recommendation on the Ethics of Artificial Intelligence
- Link: UNESCO Ethics of AI
- What it offers: Recommendations outlining principles to guide ethical AI development and use.
3. Partnership on AI
- What it offers: Research, best practices, and collaborative frameworks involving leading companies, academics, and nonprofits.