Writing With AI, Not By AI

by Matthew Russell



Welcome, my CryptoComics Compatriots. We need to talk about something that has been popping up more and more in classrooms, work meetings, creative spaces, and just about every corner of the internet: AI.

I am not against AI. Far from it. I think it is one of the most powerful tools we have seen in a long time. (Tool being the operative word here.) As many of you know, it can help people organize ideas, tighten up rough drafts (read on to see how I did that with this post), spot weak points in an argument, and even suggest angles that might not have been obvious at first glance.


Used well, it can save time and sharpen the work. Used poorly, it can also flood the world with highly polished nonsense. That is where the real conversation begins.

This Is Not New for Me

Writing has been part of my life for a long time. Back in high school, I was not only writing for the school newspaper, but also for the local newspaper. Over the years, I have written poetry, articles, scripts, and comics. I have helped clean up other people’s novels, scripts, and comic work too. Some of my work has been published. A lot of it has not, mostly because time has a funny way of getting in the ring and body-slamming your plans.

My point is simple: I am not turning to AI because I do not know how to write. I am turning to AI because I do know how much work good writing takes.


Good writing is not just typing words onto a screen. It is research. It is structure. It is voice. It is revision. It is checking facts when something sounds right but may not actually be right. It is asking whether a sentence is clear, whether a point is fair, and whether the final piece is actually worth someone’s time.

That is why I think the conversation around AI often misses the mark. Too many people act like there are only two sides. Either you reject it entirely, or you let it take the wheel. I do not think that is the choice at all.

This is also NOT a brand statement from CryptoComics Marketplace or anyone connected to it. This is my opinion alone. We've had several meetings over the years about this exact topic, and even in the office there has been a divide on it. 

Some people are more open to AI, some are more skeptical, and some are still figuring out where they land. That is part of why I wanted to write this in the first place.

Where I Stand on AI

I do not believe AI is the enemy. Carelessness is the enemy, and so is deception. That is the line for me.

The issue is not whether someone used AI at some point in the writing process. The issue is whether a real person stayed accountable for the final result. 

Did they do the research? Did they verify the claims? Did they think through the argument? Did they bring any real experience, judgment, or perspective to the piece? Or did they just hit generate, copy, paste, and call it a day? That difference matters far more than most blog writers think.


Because readers are not just looking for words. They are looking for trust. They want to know that the person behind the article actually cares whether the information is accurate. They want to know they are reading something shaped by real thought, not just a machine spitting out the most statistically likely paragraph.

I think if AI plays a meaningful role in the work, the audience should know. Not because AI use is automatically bad, but because honesty matters. Transparency matters. Trust matters. So does accountability.

I think of AI as nothing more than a hammer. It is a tool. You can either build with it or destroy with it. I heard that somewhere once about some other tool (I think it was a Thor movie), and it seems to apply so well to AI that I had to add it here.

How I Actually Use AI in My Blog Process

When I am working on a blog post, AI is not the starting brain and it is not the final authority. I do the research first.


That means I am reading articles, looking through source material, comparing perspectives, and trying to understand the topic before I ever start shaping the post. Once I have that foundation, I put together a draft. 

Sometimes it is rough. Sometimes it is messy. Sometimes it looks like a stack of half-built Lego pieces scattered across the table. That's a very normal thing for me. If you could see inside my brain, you would be shocked at the chaotic jumble of messy, half-written articles and images of superheroes flying around.

Anyway, from there, I typically use ChatGPT as a conversation partner. I will ask it to help me look at the draft from different angles. I may ask where the weak spots are, what objections a reader might have, or whether there are related ideas I haven't explored yet.

Sometimes it helps me see blind spots. Sometimes it pushes me toward better questions. Sometimes it suggests articles, papers, or directions for further reading that I can go explore for myself.


That's important, because I don't just accept those suggestions and move on. I DO the extra research. I read more. I compare. I verify. Turns out, that is also the fun part. Then I go back into draft mode and revise what's missing.

After that, I may use AI to help rewrite or polish sections so the article flows better, sounds cleaner, and reads more naturally. That is not the same thing as handing over authorship. It is editing support. It is cleanup. It is refinement. 

I once had an editor compare me to a kindergartner for my grammar and use of punctuation. It turns out that this stuff is important.

Look at the phrase “Eat your dinner.” versus “Eat, your dinner.” One is a parent’s instruction; the other addresses someone as “your dinner.”

Hyphens matter too. “Twenty five-dollar bills” equals $100, while “twenty-five dollar bills” equals $25. Which one do you want?

Anyway, back to the part I refuse to skip: I go back through the article and double-check every claim, every fact, every reference, and every point that could be wrong. If something feels shaky, I look it up. If something sounds too confident, I verify it. If something does not hold up, it does not stay in the article.

That is the process. AI helps. I decide.

The Difference Between Help and Replacement

This is where I think a lot of people get stuck. There is a big difference between using AI as a tool and using it as a replacement for thinking. (Trust me, I work at a high school, and I see so many reports where the students refused to think even a little and just used AI.)

Using AI as a tool can look like brainstorming, outlining, editing, restructuring, or pressure-testing ideas. It can help take a decent draft and make it stronger. It can help surface angles that are generally easy to miss. It can help clean up clunky wording so the message lands better.

Using AI as a replacement looks very different. That is when someone throws in a prompt, gets back a full article, does little or no verification, and pushes it out into the world because the box has been checked and the content calendar has been fed.


That is not writing. That is content farming with better software, and that is exactly where trust goes to die.

Why This Matters to Me

Part of the reason I wanted to write this is because this question keeps coming up. I have heard it from students in class. I have heard it in work meetings (as previously mentioned). I have heard versions of it in education, creative work, and business spaces alike.

Where do we draw the line? At what point does technology help creativity, and at what point does it start replacing the very thing people came for in the first place?

For me, the line is not drawn at the existence of the tool. The line is drawn at responsibility.

If you're using AI to support your process, but you are still doing the thinking, the checking, the refining, and the final decision making, then you are still doing the work.

If you are using AI to skip the thinking, skip the research, skip the fact checking, and skip accountability, then the tool is no longer helping your process. It is totally replacing your responsibility.

The Second Type of Post I Write

Not every blog post starts with a pile of fresh research. Some of my posts come from something I already know deeply because I teach it. You can check out my recent post on Comic Book Art vs. Digital Painting as an example.

For example, when I write posts based on my Graphic Novel class, the process is different. In those cases, I am often starting with lesson plans I have already built, used, tested, and refined in the classroom. 


I have already done the research. I have already taught the lesson. I have already seen what confuses students, what clicks, and what needs a better explanation.

That gives me a strong draft to begin with, but it goes beyond that. Over time, I have used my notes on student reactions to make those lessons better before they ever become a blog post. 

I know which parts get real questions, which parts need to be slowed down, and which explanations make students’ eyes glaze over because I have seen it happen in real time. That classroom feedback helps me tighten the wording, explain things more clearly, and build a better draft before AI ever touches it.

From there, I can use AI to help shape that lesson material into a blog post format. Then I go back through it, re-read it, fix inconsistencies, and make sure it is ready for the blog. In that case, I am not asking AI to invent the substance. I am asking it to help me transform teaching material into something that another reader can learn from.

Once again, the value is not coming from the tool alone. It is coming from experience, judgment, and revision.

Why Transparency Still Matters

Some people hear this kind of process and say, “Well, if you are checking everything and shaping the article yourself, why mention AI at all?”

Because I think the audience deserves honesty. It comes down to trust.

If a creator is using AI in any meaningful way, I believe that should be presented to the audience. Not to shame the creator. Not to scare off the reader. Not to start some dramatic moral panic. 

It should be shared because the audience deserves to know how the work was made. From there, they can make their own decision and, if money is involved, decide with their wallets.

That matters to me, and to us as a company.

We're in a strange moment right now. AI can make writing look clean, confident, and finished even when the person using it has done very little real work. That means readers have to be more careful. It also means writers, educators, and creators should be more honest about how these tools fit into the process. I know that this is a line in the sand, and I truly hope the following image isn't too "ON-THE-NOSE".


In fact, I was in the middle of writing this very article when the topic came up again in a work meeting. That timing made me laugh a little. Apparently this conversation has a way of finding me.

What stood out to me in that meeting was how differently people still see AI. Some people think in black and white. To them, AI is either completely wrong or completely fine. 

Others see it in shades of gray and understand that the tool itself is not really the whole issue. The bigger issue is how it is being used, whether the audience is being told, and whether anyone is taking responsibility for the final result.

That tension makes sense. This still feels like a new topic to a lot of people, even though AI has actually been around for much longer than most people realize. What is new is how visible it has become, how easy it is to access, and how fast it has moved into everyday creative and professional work.

I am not interested in pretending AI had nothing to do with the work when it clearly helped shape it. At the same time, I am also not interested in acting like AI created the article on its own, because that is not true either.

The truth is more grounded than that. I write with AI, not by AI.

Final Thoughts

Technology is going to keep moving forward whether we like it or not. That part is not up for debate.

What is up for debate is how we use it, how honest we are about it, and whether we stay accountable for what we put in front of other people. That's the standard I care about.

I do not use AI to flood the internet with empty words just to say something got published. I use it to strengthen ideas, improve structure, challenge blind spots, and clean up the writing. Then I do the work of checking it, refining it, and making sure it says something real before it ever goes live. That is the line I personally draw.

And honestly, I think more people should draw one too.

To Be Continued


This conversation does not end with writing. In a future post, I will dig into AI art and what it means for creativity, originality, and teaching in the Graphic Novel classroom. 

I feel that this topic deserves its own space, especially as more artists and students wrestle with where inspiration ends and replacement begins.