by Matthew Russell
Welcome, my CryptoComics Compatriots. My last article was about writing with AI. This one is about art, and this is where things get messier. A whole lot messier.
Writing and art aren't the same beast, even if they both involve AI. People react differently to AI art, and honestly, I totally get why. When you’re talking about visual art, you’re not just talking about information or sentence structure.
You’re talking about style. Craft. Identity. Years of practice. The part of a creator that people can recognize from across the room. If you don’t believe me, check out the differences between Jim Lee’s Batman and Kelley Jones’ Batman. Night and day. Both are amazing, but you can see how each artist has spent decades honing their craft in completely different ways.

So let me make my stance clear right out of the gate. Like the previous article, I’m not against AI in art. I’m against deception and I’m against people using AI as a shortcut, then trying to pass it off like it came out of years of drawing, painting, study, anatomy practice, composition work, and real artistic struggle.
Like the last article, the opinions expressed in this post are my own. I do talk about company policy, but the opinions are mine.
This mainly comes down to trust. Our company policy is simple: if you’re using AI in any meaningful way, you need to be forthcoming about it. Not vague. Not slippery. Not hidden in tiny print where nobody will ever see it. Forthcoming.
If a piece was AI-assisted, say so. If AI was used for ideation, reference generation, mockups, lighting exploration, composition options, color testing, or anything else that materially shaped the final result, the audience deserves to know.
Then they can decide for themselves. Don’t worry, they will decide...
That’s how trust works. That’s how respect works. And if money’s involved, that’s how people get to decide with their wallets. That doesn’t mean every use of AI is evil. It means the audience shouldn’t be tricked.
This is where people start splitting into camps. Some compatriots (see what I did there?) hear “AI art” and act like every single use is creative theft, artistic collapse, and the end of comics. Other people act like it’s just another tool and anyone who questions it is being dramatic.

I think both extremes miss the point. The real problem, in my opinion, isn’t the tool by itself. The real problem is how it’s being used. A pencil is a tool. Photoshop is a tool. A camera is a tool. 3D models are tools. Reference boards are tools. AI is a tool too. But tools don’t magically remove responsibility.
The question isn’t whether AI touched the process. The question is what role it played, whether the audience is being told, and whether the artist is still actually doing the work that gives the piece its value.
This article is not about the process of generating AI art or where and how that art was made. That is another article entirely and a completely different topic. I might need to cover it in a future article, so be on the lookout.
For me, the line is pretty simple: if AI is helping you explore, test, brainstorm, or speed up non-core parts of the process, that’s one conversation. If AI is replacing the hard-earned craft and then being passed off as your own hand-built artistic skill, that’s a different conversation entirely.
That is a HUGE difference.
Using AI to generate rough ideas, explore camera angles, test costume directions, or kick around visual possibilities isn’t the same as generating a finished piece and presenting it like it came from years of artistic discipline.

Using AI to help visualize an early concept isn’t the same as claiming authorship over labor you didn’t actually do. Using AI as part of a transparent workflow isn’t the same as building your brand on a false impression. That’s the line.
I want to be specific here, because this is where people get nervous. If I use AI in art, I see it fitting into support roles, not throne-snatching roles.
That could include:
brainstorming visual directions
testing mood or lighting ideas
rough concept exploration
exploring costume variations
trying camera angle ideas
generating quick inspiration boards
helping visualize an idea before building it properly

That does NOT mean I’d treat AI output as equal to the years of practice it takes to actually draw well, compose a page, ink cleanly, color with purpose, or build a real visual storytelling voice. Those things still matter to the audience...a lot.
In fact, in a class like Graphic Novel, they matter even more. If students skip the hard part and let AI do the heavy lifting, they’re not learning the craft. They’re dodging it. I’m not interested in teaching students how to dodge the work.
This is where the conversation gets real for me. In a Graphic Novel class, the goal isn’t just to end up with a finished image. The goal is to learn how to think visually, how to construct a figure, how to stage a scene, how to control value, how to guide the reader’s eye, how to tell a story with images, and how to build a page with intent.
If a student jumps straight to AI for the final image, they may get something flashy. What they may not get is understanding. That’s a really big problem.
I care a whole lot more about whether a student is actually learning than whether they can spit out something polished in ten seconds. That said, I do think there are controlled ways AI could be discussed in class.
Not as a replacement for drawing. As a topic, an industry conversation. As a tool to analyze critically. As something students need to understand because it’s already changing the landscape whether artists like it or not. That’s different from handing it the keys.

One of the biggest mistakes people make in this debate is assuming that speed equals creativity. It doesn’t.
Getting an image quickly isn’t the same as building an idea. A prompt isn’t the same thing as draftsmanship. Style imitation isn’t the same thing as developing a voice. And visual polish isn’t the same thing as artistic understanding.
That’s part of why this topic gets so heated. People aren’t just arguing about software. They’re arguing about what counts as authorship, what counts as effort, and what the audience is really paying for when they support an artist. Those aren’t small questions.
This is the hill I keep coming back to. If AI was used in a meaningful way, say so.
Don’t make your audience play detective. Don’t let them assume a fully hand-crafted process if that’s not what happened. Don’t hide behind fuzzy wording like “enhanced,” “assisted,” or “developed with new tools” if what you really mean is that AI generated a major part of the visual outcome.
Be plain about it. The audience can handle the truth. What kills trust isn’t the existence of the tool. It’s the feeling that someone tried to sneak one past them.

Just like with writing, I know not everyone’s going to land in the same place on this. Some people will draw the line earlier than I do. Some later. Some will reject AI art completely. Others will embrace it far more than I ever would.
We’re all trying to figure out what the boundaries should be while the technology keeps moving faster than the discussion.
But for me, the core issue stays the same: honesty, transparency, accountability. If those stay in place, at least we’re having a real conversation. If they disappear, then all we’re left with is polished confusion and broken trust. It is for that reason that when uploading a new comic or graphic novel to the CryptoComics Marketplace, we ask specifically about the use of AI. Answering those questions does not in any way "block" the comic from publication on our platform, but it does inform the reader.
When it comes to the AI images I use in blog posts, I look at them very differently than I would a commissioned cover, a comic page, or any kind of paid client work. I am not claiming those images were made by a human artist, and I am not trying to pass them off as if they were. I am also not replacing an artist on a paid assignment or taking money out of an artist’s hands for something I should have hired out. Realistically, if I were not generating those blog images, I would probably be using a stock image from a service like Shutterstock anyway. I am not selling the image itself, and I am not creating some fake backstory about where it came from. I am using it as visual support for the article. It helps entertain a little, sets the mood, and breaks up the wall of text so the post is easier and more enjoyable to read.

I’m not interested in pretending AI art doesn’t exist. It does. I’m not interested in acting like every use of it is automatically evil. It isn’t. And I’m definitely not interested in telling artists, students, or readers to ignore what’s happening right in front of them.
But I am interested in drawing a hard line around honesty.
If AI is part of the process, be upfront about it. If it shaped the work, say so. If the audience is paying for the result, give them the respect of telling them what they’re actually buying.
That’s not anti-tech. That’s pro-trust. And in art, trust still matters more than people think.
