I was mucking around with some AI tools the other day, showing my youngest daughter how to create some genuinely amazing art with Midjourney, when she asked me something that properly stopped me in my tracks: “Dad, isn’t AI, like, really bad though?”
And there it was. The moment I realized we’ve got a serious perception problem on our hands. Not because AI is perfect (god forbid), but because the loudest voices in the room have managed to completely stitch up the narrative.
How We Got Here
Here’s the thing about AI - the “bad guys” (venture capitalists, tech bros, and corporate overlords) have done such a spectacular job of making it seem utterly horrific that many of the people who should be shaping its future won’t go anywhere near it. And I mean the really good ones - the artists, the activists, the creative technologists who actually give a damn about making things better.
They’ve managed this through what I’m calling the unholy trinity of AI anxiety:
- Environmental impact (“It’s melting the ice caps!”)
- Creator exploitation (“It’s stealing your art!”)
- Bias amplification (“It’s perpetuating systemic discrimination!”)
And you know what? These aren’t entirely wrong. But they’re not the whole story either. It’s a bit odd, because while we’re all busy wringing our hands about these very real concerns, the tech bros are off building their digital death star without any meaningful opposition.
The Actual State of Play
I’ve been futzing with AI tools pretty much daily for the past year, and the reality is way more nuanced than either side wants to admit. Take the environmental impact - yes, training large language models uses a ridiculous amount of energy. But (and this is important), once they’re trained, the actual day-to-day use is surprisingly efficient.
Or to put it another way - your Netflix binge probably has a bigger carbon footprint than my week of AI experimentation. Not that this excuses the initial training cost, but context matters, right?
And then there’s the whole “stealing from artists” debate. Look, I get it. I really, really do. But while we’re avoiding these tools on principle, the same corporations we’re protesting against are building proprietary systems anyway. They’re just doing it without any input from the creative community — the people who could actually push for better attribution, compensation models, and ethical guidelines.
The Ironic Twist
The really wicked thing about all this? The people who are most concerned about AI’s negative impacts are exactly the people we need involved in its development. Because right now, what we’ve got is:
- Tech bros building surveillance systems
- Corporate AI teams optimizing for engagement
- Military contractors doing… whatever the hell military contractors do
- VCs throwing money at anything with “AI” in the pitch deck
Meanwhile, the artists, educators, activists, and genuinely thoughtful technologists are staying away because they don’t want to be part of the problem.
What We’re Missing
I’ve seen some properly brilliant applications of AI that never make the headlines:
- Small creative studios using it to prototype ideas faster
- Artists incorporating it into their workflow (while still maintaining their unique voice)
- Educators creating personalized learning materials
- Accessibility tools that would have been impossible without AI
But these stories get drowned out by either doom-and-gloom predictions or hyped-up corporate propaganda about how AI will solve everything from climate change to male pattern baldness.
So What Do We Actually Do?
Here’s my crappy hack of a solution: We need to stop letting the assholes define the narrative. And that means getting involved, even if it feels a bit icky at first.
- Start experimenting - Even if you’re just mucking around with free tools, get your hands dirty. Understand what these systems can and can’t do.
- Share your concerns - But do it from a place of engagement rather than pure opposition. “This could be better if…” rather than “This should not exist.”
- Build the alternatives - If you’re worried about bias, build less biased systems. If you’re concerned about creator rights, develop better attribution models.
- Support the good actors - There are people and organizations trying to do AI right. Find them, support them, amplify their work.
And most importantly, remember that technology is just a tool. It’s not inherently good or evil - it’s what we do with it that matters.
The Thing About My Daughter…
I ended up telling my daughter that AI is like any other powerful tool - it can be used well or badly, and the best way to make sure it’s used well is to be part of the conversation. Then we spent the afternoon creating some genuinely amazing art together, talking about attribution, and discussing how we could use these tools to make cool stuff while still supporting human artists.
Because that’s the thing about the future - it’s going to happen whether we engage with it or not. And I’d rather have the thoughtful, creative, ethically minded people involved in shaping it.
Now go forth and make something interesting. Even if it’s just a crappy hack. Especially if it’s just a crappy hack. Because right now, we need all the thoughtful hackers we can get.