About a week ago, ChatGPT rolled out a major update to its image generation, and suddenly my feed was flooded with AI-generated images in the style of Studio Ghibli. The results are impressive (yes, it finally gets hands and text right) but also pretty scary (given how often I draw illustrations). They raise some big questions about authorship, originality, and what we actually mean by “creativity” in a generative age. In this article and podcast, I explore the evolution of generative visuals, share a few surprising examples, and reflect on what all of this means for artists, students, and the future of making.
Listen to the Podcast
If you enjoy this blog but you’d like to listen to it on the go, just click on the audio below or subscribe via iTunes/Apple Podcasts (ideal for iOS users) or Spotify.
AI Didn’t Make Art, But It Got the Hands Right This Time
About a week and a half ago, ChatGPT released its newest version of image generation. We are now seeing a flood of Studio Ghibli styled AI-generated images. Notice I didn’t use the word art. About three years ago, I used the term “AI-generated art” and my son responded with, “AI doesn’t create art. It generates images. Only humans can create art.”
But I do think it helps to ask, “How is machine learning advancing and changing and what does that mean for creativity?”
We’ll get there soon but first I want to share a few things I have noticed about the advancements.
The first thing you’ll see right off the bat is that AI imagery can finally get text correct. That used to be the dead giveaway that someone had used AI in their visuals. I asked ChatGPT to generate a book cover concept for my upcoming book and you’ll notice immediately that it got the text correct.
On a side note, I’m still going to draw my own book covers, just as I did with my previous books. I’m also going to create my own visuals and diagrams because I love to draw and it’s part of what makes a book feel like my own work.
The second thing you’ll notice is that photorealistic images are actually very realistic right now. If we look at this image of a painter from before and the current one, it’s a night and day difference.
In the past, most of the images had a glossy, plastic feel. It was like an Instagram filter gone wrong. However, now you’re starting to see things like wrinkles and sweat. It has moved from the Uncanny Valley to photorealism. Compare these before and after images with the prompt “generate an image of a couple on their wedding day.”
Note that it still struggles with bias. Ask it to generate an image of a left-handed painter and you’ll get a right-handed painter almost every time (like the one above).
But on the topic of realism, I feel like the eyes and hands are finally correct. For the longest time, image generators like DALL-E had a hard time with eyes and hands. You’d often get a seemingly normal photo, but both hands had six fingers. I imagine if Inigo Montoya had inhabited this machine learning world, he would have constantly battled innocent people in his quest to find the man who killed his father. But if you look at the newest iteration of generative AI, the hands and eyes are so much better. Note the point from earlier about the left-handed painter.
The images are also more creative in what I consider to be functional novelty. In other words, things that are somewhat original but also useful. I asked ChatGPT to come up with a concept for a sports team called the Portland Hipsters. Note the creative touch of turning the coffee mug into a beard. There’s also a subtle nod to the Starbucks logo.
I then asked for a 1970s minor league baseball team logo for a team called the Salem Sasquatch and it kind of nailed the vibe / style.
But it can also mimic other artistic styles really well. I asked ChatGPT to take a visual I had created of my dog, Athena (the first image you’ll see), and then create a similar picture of Zeus (who has a thinner snout). It didn’t get it perfect. However, it felt a little unnerving to see something that felt so . . . me.
As a writer and an illustrator, I have a hard time with this. My style feels deeply personal.
I have some big concerns about these changes. I’m concerned about copyright violations (something I brought up in my book The AI Roadmap). I worry about what this means for artists and designers. I’m also concerned about how this can be weaponized for deep fakes and catfishing.
But I am also curious. I’m intrigued by what all of this means in the long-term for creativity. This is an example within the visual arts but the same thing is happening with audio. And, on a profound level, this is happening in text-based models. As generative AI moves further into deep reasoning, it will impact how we use it for creative problem-solving.
So, with all of this in mind, I’d like to share some ways that I think creativity will evolve in a world of generative AI.
Seven Ways Generative AI Will Change Creativity
Here are a few trends that I have already seen that will likely increase over time.
1. We will need to figure out when to use the Centaur Approach and the Cyborg Approach
I just finished re-reading Ethan Mollick’s book Co-Intelligence and it has me thinking about a metaphor he used at the end of the book – the centaurs and cyborgs. A centaur divides the task between human and machine. The human does some parts, the AI does others. It’s collaborative but separate. You might see this in a writer who brainstorms ideas with ChatGPT, then drafts and edits everything themselves. The machine offers structure or inspiration, but the creative decisions stay with the human. I take this centaur approach in my own writing. I outline a piece and then ask for feedback. I might stop and have a conversation to add clarity on a topic. I use Consensus to explore research but then I write it in my own words.
A cyborg, on the other hand, blends the work together in real time. It’s less about switching roles and more about co-creating. A cyborg approach might look like someone generating ideas, asking questions, revising with AI feedback, and shifting approaches midstream in a constant back and forth. I have a friend who describes his writing process as being closer to that of a writers’ room for a TV show. He interacts with two AI characters he has prompted (via ChatGPT) to bounce ideas around, and then he writes in a back-and-forth draft format with AI-generated text in his own style that he then modifies.
Mollick argues that cyborgs tend to outperform centaurs. Not because they rely more on AI, but because they’ve learned to weave it into their thinking as a kind of extended cognition. They’re not outsourcing the work. They’re enhancing their own abilities by tapping into what AI does well with things like generating ideas, testing variations, and offering structure. In terms of writing, I am definitely taking more of a Cyborg approach in my research phase and in my revision and editing phase, where I have trained a chatbot to play the role of a critic, an editor, and a busy reader.
In this way, AI becomes more like a thought partner than a tool. The most effective users aren’t just skilled at prompting. They’re reflective, iterative, and curious. They stay in control of the process, but they also allow the process to be shaped by the interaction.
For creative work (whether that’s problem-solving, designing a product, or creating visual art) this matters. A centaur approach might help with certain stages: getting unstuck, organizing thoughts, mapping possibilities. It certainly helps us avoid some of the creeping cognitive atrophy that can slip in during the cyborg phase. But the cyborg mindset unlocks something deeper. It’s not about asking AI for answers. It’s about using AI to think differently.
Our students will need to figure out when to use both approaches. They’ll need to toggle between intuition and iteration, between vision and revision. They’ll need to determine where and when they should create the firm boundaries of a centaur. But they’ll also need to find times to adopt a cyborg approach. The goal here is intentionality. Students need to see creative work as dynamic and fluid. AI doesn’t replace the creative process. It becomes part of the rhythm. And we need them to be intentional with that rhythm rather than simply copying and pasting AI work.
2. We will need to embrace a Vintage Innovation mindset.
I recently gave a keynote at a conference, and a tech director told me, “I actually read The AI Roadmap first but then a group of us did a book study on Vintage Innovation. When I think about using AI ethically, I think the answer has to be a blend of old school and cutting edge.”
When ChatGPT was first released, I warned of the two dangers of Techno-Futurism and the Lock It and Block It approach. Then I shared my own philosophy of finding nuance in a third way of Vintage Innovation:
Vintage Innovation is about honoring the past while pushing the boundaries of today. It’s the embrace of classic methods with new research. It’s that overlap of the tried and true and the never tried. It’s what happens when you use those classic techniques that have stood the test of time but you’re willing to do so with new technology or tools. Some of the most groundbreaking innovations in our world come from old ideas repurposed in new ways (like how engineers used origami principles to design foldable spacecraft parts, or how designers look to nature through biomimicry to solve complex engineering problems). These examples show that innovation isn’t always about inventing something entirely new. Often, it’s about rethinking what we already have. So, with AI, it means being human-centered but tech-infused.
When students cut, paste, build, sketch, and prototype with their own two hands, they engage in creative work at a personal level. It’s often more memorable and it’s distinct. But they might also use AI to explore possibilities, generate feedback, or refine their thinking. A brainstorming session might start with sticky notes and index cards before moving into an AI-powered mind map. A music student might experiment with chord progressions on a real piano before layering ideas with AI-generated loops. A science class might build physical models of systems and then simulate changes using AI tools. In each case, the analog tools slow things down just enough to encourage focus and intentionality, while the digital tools open up new directions that might not have emerged otherwise. That’s the heart of Vintage Innovation. It’s the notion of combining the depth of what’s always worked with the reach of what’s newly possible.
Which leads to the next idea . . .
3. Both the tactile and synchronous will be more important than ever.
Automation has reshaped creative work over the last four to five decades in huge ways. Our tools make it faster and cheaper to create, replicate, and distribute content. On one hand, the gatekeepers are gone and creative workers can put out content that reflects their own style. But it’s also cheaper, and that has driven a race to the bottom for certain creative work. This, in turn, has led to a premium on the human elements that can’t be replicated – in particular, anything that is synchronous, in-person, and tactile.
Musicians no longer rely on album sales the way they once did. Instead, they tour, build community, and create live experiences that can’t be automated. A hand-crafted table from a local woodworker might cost ten times more than a mass-produced one, in part because it tells a story. It carries intention. In a world where digital art can be generated in seconds, people are seeking out what feels personal, imperfect, and real. Automation didn’t eliminate creative work. It raised the bar for what we value in it. I can’t prove this but I have a hunch that hand-painted work will carry a premium in the visual arts. So will 3D art and sculpture – especially if it is imperfect.
So let’s think more about another area of creativity – the concept of problem-solving. Spreadsheets and calculators took away the need for tedious number crunching, but they also freed up mental space for deeper analysis and creative thinking. Salespeople no longer spend hours making cold calls from a phone book. They build relationships, tell stories, and host real conversations in person or online. As automation handles more of the repetitive tasks, the human element becomes the differentiator. It’s not just about what we can offload to machines – it’s about what we choose to hold onto.
This has me thinking about school for a moment. If AI can easily generate video and audio content, I wonder if students are going to need to become really good at authentic performance. We often hear school districts talking about the need to teach prompt engineering (which is a valid idea) but what about speech and debate? What about theater and improv? What about mock trial? These might not seem cutting edge but they help students learn the critical skill of communicating in front of an audience.
4. AI generated content will stretch creativity and spark innovation.
When the drum machine first came out, critics warned that this would be the end of the live drummer and the studio musician. They had reason for concern. Record companies wanting to maximize profits could easily say, “Let’s not pay a human if a machine can do this instead.” But something else emerged from the process. Electronic music. If we think of rap and hip-hop (electronic elements at their best), EDM (at their worst), or that vaguely catchy 1990s hold music, we can see that drum machines sparked new musical innovations. We see it with sampling and reinterpretation. But we also see it with T-Pain (who actually has an amazing voice) using Auto-Tune the “wrong way” and sparking new artistic innovations.
This isn’t a new phenomenon. Technology has always shaped the arc of art. The invention of mirrors, lenses, and new types of paint allowed artists to achieve incredible levels of realism. Think of the lifelike portraits of the Renaissance or the precise light of the Dutch Masters. But the invention of photography changed things overnight. Suddenly, realism was no longer the artist’s domain. The camera could capture a moment faster and more accurately than any painter could. But instead of ending art, it pushed art in a new direction.
Artists leaned into what photography couldn’t do. Things like emotion, abstraction, and perspective. Impressionism emerged with its blurred edges and fleeting light. The post-Impressionist Van Gogh used bold brush strokes to create an entirely new reality. Over the next century, we experienced reductionist Dada art, wild Surrealism, distinct Cubism, geometric minimalism, 1960s Pop Art, and postmodern pastiche. Each movement embraced distortion, imagination, or irony, and each came with dire warnings of the “death of art.”
The more machines took over realism, the more artists leaned into what only humans could express and pushed the boundaries of the definition of art. I get it. Some people hate this process. Some people love hyper-realistic art. And there’s still a market for that (see the previous point about handcrafted work). But we also need to recognize that AI will push us to create art in new ways.
5. Context and empathy will be even more important in creative work.
AI agents do a fantastic job taking on distinct roles when we give them clear prompts. But they aren’t sentient. They exist in a space that’s . . . actually, they don’t exist in a space at all. And that’s a challenge for creative work. Because chatbots generate language predictively, they can’t understand the nuances of space and place. In other words, AI can’t read the room. It doesn’t know the local context of your school, your city, your region.
Similarly, you can program chatbots to be pro-social and even pretend to understand how a group feels or what they think. But it takes a human to demonstrate true empathy.
Whether you’re writing a novel, painting a picture, or solving a problem, context and empathy are more important than ever. I’ve written before about how I’ve worked with English and Social Studies teachers to rewrite their writing prompts to be more AI-resistant by focusing on context and empathy. But there’s a bigger trend at work. We want students to develop contextual understanding and empathy as mindsets, habits, and skills for the future.
6. The line between curation and creation will get blurry.
I once listened to a podcast interview with the legendary producer Rick Rubin. He mentioned that he didn’t have a ton of musical talent. He wasn’t the world’s greatest audio engineer, either. His secret talent was his taste. As he puts it, “The confidence that I have in my taste and my ability to express what I feel has proven helpful for artists.” His taste is part of what makes that Johnny Cash cover album, the iconic Red Hot Chili Peppers and Tom Petty albums, and my personal favorite, the Beastie Boys’ Licensed to Ill, so great.
As AI-generated content grows more sophisticated and creative, our tastes will become much more important. I sometimes wonder if the role of an artist might evolve toward being more like a producer in some respects (or a producer who then heavily modifies the work as an artist). It has me wondering if curation will become more important as a bridge between critical consuming and creating.
So, if we think about curation, it’s this overlap between being a critic and being a fan. A curator pays careful attention to context and larger trends and stories. Curators situate the work in a time and place in a way that then makes it timeless. They might add their own description or spin to the work. Often, curators connect works to one another as they organize information and artifacts.
But that’s also what we want with our students. We want them to have both an excited passion and a nuanced care for what they are learning. We want them to pay attention to context and purpose in the information they consume. We want them to make connections and provide their own lens.
This is more important than ever in a world of AI. I find it fascinating that so many people who generate AI images have suddenly started paying attention to art history. And they’ve gotten really intricate in using AI to combine styles and then refine that over and over again. I realize that this might not seem like “true art.” Then again, photographers weren’t considered true artists at first, either. And many “true artists” have teams of other artists who do the work for them (think Jeff Koons). They’re essentially the producers or creative directors.
7. Being distinct will matter more than ever.
Generative AI uses predictive analytics to create new work. In the past, that process has led to work that skews toward the derivative and clichéd. I compared it to vanilla ice cream and mentioned that creators will have an advantage when they can make it their own.
Two years later, I feel more nuanced about this. Generative AI has shown some real potential in divergent thinking. It does well with connecting unrelated ideas and concepts. It’s getting way better at generating “functional novelty,” or works that are pretty original but also useful.
What that means for us is that we will need to think differently than the algorithm. If you are solving problems, you might use AI to solve a problem and then ask, “Is there a different way to solve this? Is there something I know that the AI is missing?”
If you are creating works of art, you might need to be wildly and unabashedly different.
Again, the concept of curation and taste plays a huge role here. You find your voice, in part by the ways in which you critically consume. But it’s also about experimentation. It’s about testing things out in front of an audience. It’s about identity and your self-story. As we think about student writing, voice and originality will play a more significant role if we want their work to stand out in a sea of sameness.
Get the FREE eBook!
Subscribe to my newsletter and get A Beginner’s Guide to Artificial Intelligence in Education. You can also check out other articles, videos, and podcasts in my AI for Education Hub.
Fill out the form below to access the FREE eBook: