How I Decide What Should Exist — and What Should Not
I wrote this framework because, early on, I noticed something most AI tutorials never talk about: the hard part isn’t making AI do what you want — it’s deciding when it should stop. This page isn’t a manual. It’s a guide to the moments that matter when you’re making decisions with AI.
First Principles
I’m not interested in gimmicks. I don’t use AI as a substitute for thought. If a system starts taking over decisions I feel responsible for, I step away from it.
If a system makes it harder to tell who is responsible for a decision, I don’t use it.
Constraints Come First
I define constraints before generating anything.
Not to limit creativity — but to protect it.
Constraints clarify:
- what the system is not allowed to do
- where aesthetic escalation becomes distortion
- which decisions remain human-only
If constraints are added after generation begins, they usually arrive too late. By then, the system has already shaped the outcome.
In one project, I chased polish long enough that the image lost the thing that drew me to it in the first place. That moment taught me to write constraints before prompts.
Iteration Is Evaluation, Not Improvement
I used to think more iterations meant better work. But I kept running into versions that looked better yet felt quieter, and quieter was the exact opposite of what I needed.
Each pass asks the same questions:
- Is the core idea clearer, or just more polished?
- Did anything unnecessary get introduced?
- Is the work becoming safer, smoother, or more explanatory?
When the answers trend toward polish without added meaning, iteration has done its job.
Refusal Is Part of the Process
About halfway through one project, I found an output that was technically solid but already heading in the wrong direction. It was tempting to save it, and that temptation was exactly the wrong signal. So I said no.
I reject work when it:
- adds drama where restraint is needed
- beautifies something meant to remain unresolved
- introduces narrative certainty that wasn’t earned
- feels impressive but hollow
Refusal isn’t a moral stance. It’s an editorial one.
Knowing When to Stop
Most over-generated work doesn’t fail loudly.
It fails quietly — by becoming complete.
I stop when:
- emotional clarity is present but unresolved
- further refinement would explain too much
- the work begins to feel comfortable instead of honest
I still don’t know if I always stop at the right moment. Sometimes I pause because something feels unfinished, and that’s okay. The goal isn’t perfection — it’s truth to intention.
Stopping is how I keep authorship intact.
What This Framework Produces
What this framework produces — slowly — are fewer artifacts and more answers I can stand behind.
It also produces work that can be defended — not because it is perfect, but because each choice was intentional.
You can see this framework applied in practice throughout the AI Case Studies on this site.
Why This Matters
Tools keep evolving, but the need for judgment in my work hasn’t changed. If I hand the hard calls to a system, I lose my voice. That’s why this matters to me personally.
This framework exists to slow things down just enough to notice what’s being lost — and to decide whether that loss is acceptable.
For me, it usually isn’t.
Closing Note
This framework will evolve as tools change.
The principles won’t.
Judgment is the work.
