less artificial

How I Make This Blog with AI

My workflow for writing, narrating, illustrating, and coding Less Artificial—without outsourcing the thinking.

I get asked often how I use AI.

The honest answer is: I use AI everywhere. But I don’t outsource the thinking.

This post is my attempt to be concrete about what that means:

  • what tools I use (writing, audio, images, code),
  • how I use them,
  • and the ethical line I try to hold while I do it.

My north stars

There are a million hot takes about “AI in creativity,” but my view is pretty boring:

  1. Tools don’t automatically make art meaningless. Humans have always used tools.
  2. But tools can absolutely make outputs shallow if they replace the part where you decide what you mean.
  3. So what matters is process: where the intent comes from, how much judgment is applied, and whether the output says something real.

A line that stuck with me is that when the cost of answers drops toward zero, the value of the question becomes everything.

That framing from Art of the Problem [aop2025] (a great YouTube channel) matches my perspective: if you feed an AI tiny, vapid inputs, you mostly get generic, uninteresting outputs. If you feed it your actual thinking, such as notes, structure, constraints, and drafts, you can accelerate your ability to create something meaningful [1].

A key point, which I will expand on in future posts: how those tools were built is why there’s currently understandable outrage about the use of AI in creative areas. AI is not inherently evil or harmful, but the companies building these models often overlook the ethical hurdles that come with them. In my view, the current state (and perception) of AI is the result of a race to build the most powerful models without enough consideration for the ethical implications.

As for concrete tools: I use a lot of them. I’ll cover the main ones below.

Narration: as accessible communication

I publish audio for posts because a lot of people want to listen while walking, commuting, or doing chores. If I care about “communication,” I should probably meet people where they are. I also think it is a great way to make the blog more accessible.

What I’m using

I generate narration using the Chatterbox TTS model from Resemble AI [chatter2025].

Here’s what matters for my workflow:

  • I provide a few minutes of my own recorded speech as the zero-shot “reference clip” [2].
  • It has control knobs (like exaggeration/intensity), which help a lot for not sounding like a monotone GPS.
  • It includes built-in watermarking in generated audio.

I run it locally on my NVIDIA GeForce RTX 4080, which means:

  • I don’t have to upload raw voice recordings anywhere until I’m ready to publish,
  • I can easily align the audio with the text,
  • and I can iterate quickly when a sentence comes out weird.

Future goals

Longer term, I’d like to pair TTS with a strong translation pipeline: translate a post into other languages and have it spoken in my voice, not a generic narrator. That’s not fully baked yet, but it’s on the horizon.

Writing: as an editor

Let me draw a bright line:

  • All ideas are mine.
  • The initial outline is mine.
  • The first draft is heavily mine.
  • AI is an editor, a sparring partner, and a clarity coach.

The model and the environment

I currently use OpenAI’s GPT‑5.2 model [openai2025], but I expect to swap models over time. Models change quickly, and I want to stay on the cutting edge.

I run it in LibreChat, which gives me a clean interface, keeps my workflow local, and supports a bunch of integrations (including MCP tooling) when I want them [librechat2025].

Verification and tools (MCP)

When I’m writing technical content, the worst failure mode is a confident, wrong sentence. So I like giving my writing assistant the ability to check things. For example, I can ask it to look up a source, cross-check a claim, or verify a number.

MCP (Model Context Protocol) is an open standard for connecting an AI app to external tools and data sources [mcp]: think “search,” “calculator,” “local notes,” and so on.

That doesn’t magically make it correct, but it does shift the workflow from “pure autocomplete” to “collaborative fact-checking.”
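As a concrete sketch of what this wiring looks like: LibreChat can be pointed at MCP servers from its librechat.yaml config. The server names and packages below are illustrative placeholders, not my exact setup; check the LibreChat docs for the current schema.

```yaml
# librechat.yaml (fragment): hypothetical MCP wiring; names are placeholders
mcpServers:
  web-search:
    # a search server the assistant can call to cross-check claims
    command: npx
    args:
      - -y
      - "@modelcontextprotocol/server-brave-search"
  local-notes:
    # read-only filesystem access to a notes folder for fact lookups
    command: npx
    args:
      - -y
      - "@modelcontextprotocol/server-filesystem"
      - ~/notes
```

Once registered, the tools show up in chat, and the model can decide to call them mid-conversation instead of guessing.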

My writing rule: outlines are non-negotiable

I always start with an outline that is detailed enough that I could write the post without AI.

Then I use AI to:

  • propose alternative phrasing when I’m stuck,
  • test whether a paragraph is readable,
  • or generate counterarguments.

This is where that “value of the question” framing [aop2025] becomes practical: you don’t get originality by asking for originality. You get it by providing your actual lived experience, constraints, taste, and goals.

My research process

A lot of what I write starts as notes in Obsidian [obsidian]. It’s useful to me because it’s local-first, linkable, and flexible: basically a good home for “half-finished thoughts” that later become posts.

Images: as a tool

For the art on this site (which is not much), the point is not “make something pretty.” The point is “make something that helps the reader think.”

So my workflow starts analog:

  • I sketch ideas myself, or my fiancée Emelia Joe-Gammill sketches them with me.
  • Then I use image models to iterate toward a final asset.

The models I’m using right now

I’ve been switching between GPT Image 1.5 [gptimg2025] and Nano Banana Pro [banana2025] because they have different strengths depending on the task (editing vs. composition vs. typography).

My personal ethics on image gen

I avoid the “upload someone else’s art and ask for variations” pattern.

Not because it’s technically hard (it obviously isn’t), but because it shortcuts the part that makes creative work meaningful: intent, taste, and iteration.

My rule is:

If I didn’t put meaningful creative input into the process, I don’t publish the output.

Look out for a future post on the ethics of image generation: how these datasets were collected, what that means legally and ethically, and how it will shape the future of AI.

Coding: as an accelerant

I’ve been programming most of my life, so when I use AI in code, it feels less like “vibe coding” and more like an extremely powerful autocomplete—with the added bonus that it can read the whole repo and notice obvious mistakes.

I use Cursor as my main coding environment.

Cursor is built explicitly around “coding with AI” workflows (autocomplete, agentic edits, codebase indexing, etc.) [cursor].

My primary model for programming in Cursor is Claude Opus 4.5.

Anthropic frames Opus 4.5 as a major step for coding and agentic work [opus2025], which matches my experience: it’s especially good at multi-file refactors, UI polish, and “I know what I want, but not the exact API” moments.

The real benefit: velocity

The biggest win for me is time. Using AI, I can prototype faster, ship features faster, and spend more time on the parts of the site I care about: readability, accessibility, and user experience.

But I try to stay honest about the tradeoff: if you let an AI do everything, you don’t build skill. If you use it as an amplifier while staying engaged, you often learn faster [3].

Transparency and watermarking

I try to make two things true at once:

  1. AI tools are part of my process.
  2. Readers should never be misled about that.

On the tooling side:

  • Chatterbox includes watermarking in generated audio outputs.
  • Google explicitly discusses SynthID watermarking for images generated by its tools.

On the publishing side:

  • I clearly disclose AI usage on the site.
  • And I aim to keep the work “human-led,” meaning the creative intent is mine, and the final artifact is something I’m willing to stand behind.

The stance, plainly

If I had to compress my stance into one paragraph:

I don’t think the moral question is “did you use AI.” I think the question is:

  • Did you do the thinking?
  • Did you add meaning?
  • Did you respect other people’s work and consent?
  • Did you disclose what you did?

AI can be used to produce an infinite amount of empty content. It can also be used to help a person with something real to say communicate more clearly.

I’m trying—imperfectly—to stay on the second path.

What I’d like to improve next

A few “future workflow” goals:

  • Multilingual publishing: translate posts and narrate them in my own voice.
  • More reproducibility: keep more “how this post was made” artifacts (outline snapshots, reference lists, prompts I used).
  • Better accessibility: continue treating accessibility as part of “quality” [4].
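For the reproducibility goal, one low-tech approach I’m considering is a small script that freezes a “how this post was made” manifest next to each post. A minimal sketch, where the file name and field names are hypothetical, not an existing part of my pipeline:

```python
import json
from datetime import date
from pathlib import Path

def write_manifest(post_slug: str, outline: str, tools: list[str],
                   out_dir: str = ".") -> Path:
    """Save a small 'how this post was made' record next to the post.

    All field names here are hypothetical; the point is just to snapshot
    the human-written outline and the tool list at publish time.
    """
    manifest = {
        "post": post_slug,
        "date": date.today().isoformat(),
        "outline": outline,       # snapshot of the human-written outline
        "tools": sorted(tools),   # models/apps used, e.g. TTS, editor
    }
    path = Path(out_dir) / f"{post_slug}.manifest.json"
    path.write_text(json.dumps(manifest, indent=2))
    return path

# Example: record the artifacts for a post like this one
p = write_manifest(
    "how-i-make-this-blog",
    outline="1. north stars\n2. narration\n3. writing\n4. images\n5. coding",
    tools=["Chatterbox TTS", "LibreChat", "Cursor"],
)
print(p.name)
```

Checking the manifest into the same repo as the post would make “what went into this” auditable later, which is most of what reproducibility means for a blog.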

Until next time, let’s make intelligence less artificial.