🚀 v0.1 — Live

This is the launch cut. The diagrams, the methodology, and the ROY framework are all here now. The slide deck, poll rounds, and interview sections from each council member are coming in future updates.

I started drawing on a computer in the early 1990s with AutoCAD 10.

Since then, I’ve used just about every tool you can imagine to force ideas into visible form: house plans, network diagrams, process flows, mind maps, org charts, UML, crow’s-foot schemas, and enough hours in Visio to qualify for a minor emotional settlement.

For most of that time, the friction was not in the thinking. It was in the translation.

The hardest part was rarely understanding the idea. It was making the picture behave.

You had to pick the tool. Pick the shapes. Route the lines. Fight the canvas. And hope the final result looked enough like the thing in your head that someone else could understand it without a guided tour.

That worked. Sometimes. But it was expensive — not just in money, but in attention, effort, and delay. It was a Manual Labor Tax on every idea that deserved a visual but got a paragraph instead.

If you’ve ever opened Visio with a fully formed thought in your head and still wondered, How in the hell am I supposed to draw this thing? — you already know the tax.


From Drawing to Modeling

About eight months ago, ChatGPT coughed up a visual I didn’t ask for. It wasn’t an image. It wasn’t a screenshot. It was code. It was a text-based diagram. A flowchart. A skeleton of boxes and arrows that looked like it had been drawn in Visio but rendered in monospace.

I asked the only reasonable question: What the hell is that?

It replied: A Mermaid diagram.

I asked Copilot to generate one too. Then I discovered Microsoft Loop could render Mermaid natively. Suddenly I wasn’t “drawing” in the old sense anymore. I was describing structure in words and letting the rendering happen downstream.

That changed the economics. Mermaid plus LLMs turned diagramming from click-drag labor into text-native iteration. I could throw spaghetti against the wall at high velocity and minimal drag, then tighten the structure after the fact.
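For scale, a complete render-ready flowchart can be this small. An illustrative sketch, not one of the council diagrams:

```mermaid
flowchart LR
    %% A few lines of text is a whole diagram downstream.
    A["Idea in your head"] --> B{"Worth a visual?"}
    B -- No --> C["Stay in prose"]
    B -- Yes --> D["Describe the structure in text"]
    D --> E["Let the renderer draw it"]
```

Paste that into any Mermaid-aware surface and the boxes, arrows, and layout happen for free. That is the collapse in rendering cost the rest of this piece depends on.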


ROY: Return on Your Words

Once the rendering cost collapsed, the real question became unavoidable: What did the words actually buy?

The ROY Formula

Return on Your Words = clarity delivered ÷ words required to deliver it

Put another way: understanding extracted ÷ words invested.

A 47-bullet Slack message? Poor ROY. A “quick summary” that still requires a meeting to decode? Negative ROY. A diagram generated from 50 well-chosen words that a team grasps in five seconds and uses correctly? Potentially extraordinary ROY.
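ROY is a judgment ratio, not a measurement, but the arithmetic is easy to sketch. A toy Python sketch with hypothetical numbers; "clarity delivered" is a subjective score you assign, and the ratio only matters when comparing options against each other:

```python
def roy(clarity_delivered: float, words_required: int) -> float:
    """Return on Your Words: clarity delivered per word invested.

    Both inputs are judgment calls, not measurements; use the
    ratio to compare alternatives, not as an absolute score.
    """
    if words_required <= 0:
        raise ValueError("words_required must be positive")
    return clarity_delivered / words_required

# Hypothetical comparison: the same understanding bought two ways.
slack_wall = roy(clarity_delivered=10, words_required=900)
diagram_prompt = roy(clarity_delivered=10, words_required=50)
assert diagram_prompt > slack_wall  # fewer words, same clarity: higher ROY
```

The point of writing it down is only to make the tradeoff explicit: if the picture costs more words than it saves, the ratio drops below the prose it replaced.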

That is why I don’t believe a picture is automatically worth 1,000 words. Some pictures are decorative wallpaper with a superiority complex. Some diagrams are just confusion with borders and connector lines.

The real test is simpler: Did the visual remove enough confusion to justify the effort it took to make it?

A picture is only worth the words it saves, the ambiguity it kills, and the momentum it creates.

The first diagram usually fails that test. It wants to look finished. It wants to reassure you that the path is known, the thinking is done, and the complexity has been tamed. Real thinking is uglier than that.

Real thinking loops. It forks. It backtracks. It trims its own nonsense. It discovers, usually late, that the clever version and the useful version are not the same thing.


Round 1 — The Clean Lie

To test the thesis, I ran the same brief through a Council of AIs: ChatGPT, Claude, Copilot, Gemini, Perplexity, Notion, and Replit. Same question. Same deliverables. Same demand: write the post, write the article, and diagram the actual ideation process behind them.

The first round showed exactly what I suspected. Most V1 diagrams were too tidy. Too linear. Too willing to perform certainty. The best of that first wave came from Copilot because it was the cleanest architecturally and the only one configuring Mermaid at the renderer level with YAML front matter.

🏆 Council V1 — Copilot (Round 1 Winner)
[Image: Copilot V1 — Round 1 winner Mermaid diagram showing ideation with renderer-level YAML configuration]
View V1 Mermaid code
---
config:
  theme: neutral
  fontFamily: Aptos, Segoe UI, Inter, Arial, sans-serif
  themeVariables:
    primaryColor: '#EAF2FF'
    primaryTextColor: '#14213D'
    primaryBorderColor: '#325DCC'
    lineColor: '#64748B'
    secondaryColor: '#FFF7E8'
    tertiaryColor: '#EEFDF3'
    fontFamily: Aptos, Segoe UI, Inter, Arial, sans-serif
  look: neo
---
flowchart TB
 subgraph I1["Ideation / wording"]
    direction TD
        B["Brain-dump angles"]
        A["Define the clarity we are buying"]
        C["Draft ugly first post"]
        D{"Hook lands?"}
        E["Ask LLM for Mermaid skeleton"]
  end
 subgraph I2["Modeling / diagramming"]
    direction TD
        F["Render V1 🧭"]
        G{"Too tidy / too fake?"}
        H["Add forks, loops, dead ends"]
        I["Trim labels · reorder flow · kill clutter ✂️"]
        J["Diagram earns its space"]
  end
 subgraph I3["Post assembly / shipping"]
    direction TD
        P["Write around diagram:\nHook · thesis · proof · CTA"]
        Q{"Clear and scannable?"}
        R["Ship post + code + image set 🚀"]
        W["Watch comments for confusion"]
  end
    S(("Spark ✨")) --> T["👀 Notice text fatigue"]
    T --> U{"Text alone enough?"}
    U -- Yes --> P0["Stay in prose"]
    U -- No --> A
    A --> B
    B --> C
    C --> D
    D -- No --> B
    D -- Yes --> E
    E --> F
    F --> G
    G -- Yes --> H
    H --> I
    I --> F
    G -- No --> J
    J --> P
    P --> Q
    Q -- No --> I
    Q -- Yes --> R
    R --> W
    W -. feedback loop .-> B

     B:::core
     A:::core
     C:::draft
     D:::gate
     E:::core
     F:::draft
     G:::gate
     H:::risk
     I:::risk
     J:::publish
     P:::core
     Q:::gate
     R:::publish
     W:::low
     S:::spark
     T:::core
     U:::gate
     P0:::low
    classDef spark fill:#FFF4CC,stroke:#D6A700,color:#4A3B00,stroke-width:2.5px
    classDef core fill:#EAF2FF,stroke:#325DCC,color:#102A43,stroke-width:1.6px
    classDef draft fill:#F3F4F6,stroke:#6B7280,color:#334155,stroke-width:1.4px,stroke-dasharray: 5 3
    classDef gate fill:#F3E8FF,stroke:#7C3AED,color:#3B0764,stroke-width:2px
    classDef risk fill:#FFF1F2,stroke:#DC2626,color:#7F1D1D,stroke-width:1.8px
    classDef publish fill:#ECFDF5,stroke:#059669,color:#064E3B,stroke-width:2.2px
    classDef low fill:#F8FAFC,stroke:#94A3B8,color:#64748B,stroke-width:1.2px,stroke-dasharray: 3 3
    linkStyle 0 stroke:#325DCC,stroke-width:2.4px,fill:none
    linkStyle 3 stroke:#325DCC,stroke-width:2.2px,fill:none
    linkStyle 8 stroke:#DC2626,stroke-width:2px,stroke-dasharray: 6 4,fill:none
    linkStyle 12 stroke:#DC2626,stroke-width:2.2px,fill:none
    linkStyle 13 stroke:#DC2626,stroke-width:2.2px,stroke-dasharray: 6 4,fill:none
    linkStyle 14 stroke:#059669,stroke-width:2.8px,fill:none
    linkStyle 17 stroke:#DC2626,stroke-width:2.2px,stroke-dasharray: 6 4,fill:none
    linkStyle 18 stroke:#059669,stroke-width:2.8px,fill:none
    linkStyle 20 stroke:#7C3AED,stroke-width:2px,stroke-dasharray: 2 4,fill:none

Round 1 winner. Copilot earned it by operating at the renderer level — not just styling nodes, but configuring the theme engine itself.

V1 is important because it shows the temptation. Even when the structure is strong, the diagram still wants to sand off the confusion. That makes it easier to consume — and slightly less honest.


The Council of AIs

What I was running, without naming it at first, was not automation and not a pipeline. It was a deliberative body. A council is the body. The counsel is what it produces.

That distinction mattered because the value was never going to come from one model saying one brilliant thing once. The value came from parallel generation, comparative judgment, and hybrid synthesis.

Council roles that emerged in practice

Same brief. Different strengths. Better synthesis.

ChatGPT: strongest all-around scaffolding for article, post, and comment synthesis.
Claude: best narrative architecture and the most honest second-round diagram.
Copilot: cleanest renderer-level Mermaid configuration and strongest V1 discipline.
Perplexity: tightest ROY framing and a disciplined explanation of the visual-ROI thesis.
Gemini: maximalist energy, memorable coinage, and sharp theatrical framing.
Notion: by the end, effectively the synthesizer, archivist, and court reporter.

The best result did not exist at the start. It emerged only after the council saw itself, scored itself, and tightened the work.

That, incidentally, is also the ROY lesson applied to AI itself. One prompt to one model is the 47-bullet Slack message. Multiple prompts to multiple models, with synthesis and iteration in between, is the diagram that earns its space, the picture that is worth the words, and the council that produces the counsel, in diagrammed visual form.
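The loop the council ran can be caricatured in a few lines. A hypothetical Python sketch: the models, the scoring rule (longest draft wins, a deliberately dumb stand-in), and the synthesis step are all placeholders, not the actual session:

```python
from typing import Callable, Dict

def run_council(brief: str, models: Dict[str, Callable[[str], str]], rounds: int = 2) -> str:
    """Toy sketch of the council pattern: parallel generation,
    comparative judgment, hybrid synthesis. Nothing here is the
    real rubric; it only shows the shape of the loop."""
    # Parallel generation: every model answers the same brief.
    drafts = {name: model(brief) for name, model in models.items()}
    for _ in range(rounds):
        # Comparative judgment: pick a round winner (placeholder metric).
        winner = max(drafts, key=lambda name: len(drafts[name]))
        # Hybrid synthesis: fold the winner back into the next brief,
        # so every model sees the field before drafting again.
        brief = f"{brief}\nBest so far ({winner}): {drafts[winner]}"
        drafts = {name: model(brief) for name, model in models.items()}
    return drafts[max(drafts, key=lambda name: len(drafts[name]))]
```

The structural point survives the caricature: the output of round N is an input to round N+1, which is exactly why the best result could not exist at the start.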


Round 2 — The More Honest Drawing

Once every model had seen the others, the second round got better fast. The diagrams became less eager to perform certainty and more willing to show recursion. That is why Claude’s V2 won the second round: it made the loops visible.

Ignition. Strategy. Craft. Proof. Ship. Revision arrows threading back through all of them. That shape is the thesis.

⚙️ Council V2 — Claude Synthesis (Round 2 Winner)
[Image: Claude V2 — Round 2 winner Mermaid diagram showing ideation with visible revision loops as dashed arrows]
View V2 Mermaid code
---
config:
  theme: neutral
  fontFamily: Segoe UI, Inter, Arial, sans-serif
  themeVariables:
    primaryColor: '#EAF2FF'
    primaryTextColor: '#14213D'
    primaryBorderColor: '#325DCC'
    lineColor: '#64748B'
    secondaryColor: '#FFF7E8'
    tertiaryColor: '#EEFDF3'
    edgeLabelBackground: '#FFFFFF'
    clusterBkg: '#F8FAFC'
    clusterBorder: '#CBD5E0'
    fontFamily: Segoe UI, Inter, Arial, sans-serif
  look: neo
  layout: dagre
---
flowchart TB
 subgraph IGNITION["⚡ Ignition — The Idea Finds You"]
    direction TB
        A(["💥 Spark
        30 min of prose · nobody read it
        30 sec of diagram · everyone got it"])
        B[/"Hypothesis
        Not all pictures earn their words"/]
        C[["ROY Formula
        Clarity × Universality ÷ Words to generate"]]
  end
 subgraph STRATEGY["🧭 Strategy — Shape the Argument"]
    direction TB
        D{{"Does the
        framing hold?"}}
        E["Sharpen it
        Refine the ratio metaphor"]
        F(["✨ The Recursive Twist
        Diagram the post · inside the post"])
        G{{"Too clever?
        Will readers get it
        or roll their eyes?"}}
        H["Dial it back
        Lead with utility · not wit"]
  end
 subgraph CRAFT["✏️ Craft — Build the Thing"]
    direction TB
        I[["Medium
        LinkedIn · 3K char · images in post · code in comments"]]
        J[["Tool
        Mermaid — LLM-native · free · no design degree"]]
        K["Draft
        Hook → Thesis → Diagram → Proof → CTA"]
        L{{"Hook landing?"}}
        M["Kill the clever
        Find the honest"]
  end
 subgraph PROOF["📐 Proof — The Diagram Earns Its Space"]
    direction TB
        N@{ label: "✏️ Embed It
        This post's own process = the example" }
        O{{"Happy path?
        Does it show real ideation?"}}
        P["Rebuild
        Add forks · loops · dead ends · honest wrong turns"]
  end
 subgraph SHIP["🚀 Ship — Send It and Learn"]
    direction TB
        Q[["Publish
        Post + diagram image + code screenshot + raw Mermaid in comments"]]
        R{{"Did the visual land faster than words alone?"}}
        S(["✅ Thesis Confirmed
        The picture earned its words"])
        T["Iterate
        Refine diagram · reframe post"]
  end
 subgraph BRAND["🧰 OverKill Hill P³™ · Precision · Protocol · Promptcraft"]
    direction LR
        U[["AskJamie™
        Friendly AI Helpdesk"]]
        V[["Glee-fully Personalizable Tools™
        🌳 glee-fully.tools"]]
        W[["OverKill Hill P³™
        linkedin.com/company/overkillhillp3"]]
  end
    A --> B
    B --> C
    C --> D
    D -- holds up --> F
    F --> G
    G -- earned --> I
    I --> J
    J --> K
    K --> L
    L -- lands --> N
    N --> O
    O -- yes — ship it --> Q
    Q --> R
    R -- high multiplier --> S
    S --> U & V & W
    E -. sharpen .-> C
    G -- too cute --> H
    H -. reframe .-> F
    L -- too stiff --> M
    M -. rewrite .-> K
    O -- too linear --> P
    P -. rebuild .-> N
    R -- missed --> T
    T -. loop back .-> K
    N@{ shape: stadium}

     A:::spark
     B:::think
     C:::formula
     D:::gate
     E:::revise
     F:::insight
     G:::gate
     H:::revise
     I:::core
     J:::core
     K:::core
     L:::gate
     M:::revise
     N:::insight
     O:::gate
     P:::revise
     Q:::publish
     R:::gate
     S:::win
     T:::revise
     U:::brand
     V:::brand
     W:::brand
    classDef spark fill:#FFF4CC,stroke:#D6A700,color:#4A3B00,stroke-width:2.5px,font-weight:bold
    classDef think fill:#EAF2FF,stroke:#325DCC,color:#102A43,stroke-width:1.5px
    classDef formula fill:#F0F4FF,stroke:#4A5CCC,color:#1a2060,stroke-width:1.5px,font-style:italic
    classDef gate fill:#F3E8FF,stroke:#7C3AED,color:#3B0764,stroke-width:2px
    classDef revise fill:#FFF1F2,stroke:#DC2626,color:#7F1D1D,stroke-width:1.8px
    classDef insight fill:#ECFDF5,stroke:#059669,color:#064E3B,stroke-width:2px,font-weight:bold
    classDef core fill:#EAF2FF,stroke:#325DCC,color:#102A43,stroke-width:1.5px
    classDef publish fill:#EDE7F6,stroke:#5E35B1,color:#311B92,stroke-width:2.2px,font-weight:bold
    classDef win fill:#D1FAE5,stroke:#059669,color:#064E3B,stroke-width:2.5px,font-weight:bold
    classDef brand fill:#F8FAFC,stroke:#4B5563,color:#111827,stroke-width:1.5px
    style IGNITION fill:#FFFBF0,stroke:#D6A700,stroke-width:1.5px,color:#4A3B00
    style STRATEGY fill:#F5F3FF,stroke:#7C3AED,stroke-width:1.5px,color:#3B0764
    style CRAFT fill:#EFF6FF,stroke:#325DCC,stroke-width:1.5px,color:#1E3A5F
    style PROOF fill:#F0FDF4,stroke:#059669,stroke-width:1.5px,color:#064E3B
    style SHIP fill:#F5F3FF,stroke:#5E35B1,stroke-width:1.5px,color:#311B92
    style BRAND fill:#F1F5F9,stroke:#64748B,stroke-width:1.5px,color:#334155

Round 2 winner. Solid arrows = forward motion. Dashed arrows = the revision loops everyone always wants to hide.
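In Mermaid syntax, that distinction costs almost nothing. An illustrative two-node sketch (node names are mine, not from the council diagrams):

```mermaid
flowchart LR
    %% Solid arrow: forward motion
    Draft["Draft"] --> Review["Review"]
    %% Dashed arrow with a label: the revision loop people hide
    Review -. revise .-> Draft
```

Swapping `-->` for `-. label .->` is the entire price of admitting the loop exists.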

This was the moment the experiment stopped being a gimmick and started behaving like a method. The diagrams were no longer just illustrating the argument. They were auditing it.


Prompts in the Wild

Velocity, iteration, and vibe over premature prompt perfection

One of the points of this experiment is that the prompts were not highly polished.

They were not lab-grade.
They were not deeply pre-scripted.
They were not engineered like final-form production assets.

Most of them started as live dictation — quick, directional, natural-language prompts spoken in the moment while the idea was still moving.

That matters.

If every prompt had to be honed, polished, and weaponized before I could use it, the process would lose much of what made it valuable in the first place. The friction would rise. The mental cost would rise. The velocity would drop. And at some point I might as well go back to a traditional tool and manually draw the thing in Visio.

That is part of the thesis here: prompt precision has a cost.

Sometimes that cost is worth paying.
Sometimes it is not.

A perfect prompt is not always a high-ROY prompt.

In exploratory work — especially when using Mermaid to sketch, iterate, and surface structure — I care less about crafting the perfect prompt and more about maintaining forward motion.

These prompts were lightly edited for readability only. Grammar corrected, duplicate phrasing from live dictation removed, word overhead trimmed. The original shape and intent were preserved intentionally.

That is the point.

Prompting in practice is often fast, natural, and iterative.

A note on fairness

The brief was consistent. The conditions were not.

The core council members received materially similar direction and deliverable expectations. A few caveats matter:


Gemini ran on the free tier, the only council member with a capability ceiling imposed by tooling cost rather than the brief. Noted here. Not hidden.

Notion acted primarily as synthesizer, archivist, and court reporter rather than a voting peer.

Replit arrived later and was not part of the initial apples-to-apples round. The point of the exercise was not laboratory perfection; it was high-velocity comparative ideation with enough consistency to make the results useful.

ChatGPT V2pro is a separate diagram, produced with access to a Pro-tier reasoning model — a capability level no other council member had available during this session. It is included in the archive for transparency, not for direct comparison. Think of it as an exhibition entry: same brief, higher ceiling, different category.

Prompt 01 — The Seed

The idea started as a business-style ROI question, then bent toward pictures, words, and meaning. No deliverable yet. Just finding the governing idea.

View prompt
Thinking like a professor of business and investment, what is a realistic but attractive return on investment? There is an old adage that a picture is worth 1,000 words. If a diagram can be generated from words, what would be a good ROY — return on your words? If it takes more words to generate the picture than the picture returns in meaning, understanding, or compression, the investment may be poor. But if a picture created from words lands with more meaning than the words it took to create it, that seems valuable. What ratio of words to meaning, understanding, and structure makes this a good investment?

This prompt did not start with a deliverable. It started by finding the frame.

Prompt 02 — The Meta Turn

The project becomes recursive: the post is about diagrams, and the diagram explains the making of the post. Abstract stops here. Self-demonstrating begins.

View prompt
I'm considering a LinkedIn post/article about the visual value of pictures and understanding — specifically the value of Mermaid diagrams, Mermaid tooling, and the ability to create Mermaid diagrams with most LLMs.
I want the post itself to include a Mermaid diagram showing the ideation process of creating the post. So the post would talk about the value of using words to generate pictures, and the picture would be the steps involved in writing the post itself.
What are the steps in creating a LinkedIn post about visual value, visual ROI, and the way a picture can sometimes be worth less — or much more — than a thousand words?

This is where the work stopped being abstract and started becoming self-demonstrating.

Prompt 03 — The Happy-Path Lie

The first diagram was too clean. This prompt forced honesty back into the structure — the shortest prompt in the sequence and arguably the most important.

View prompt
Ideation, thought, and writing are not perfectly linear. This initial Mermaid diagram shows a straight happy path from start to finish, but that is not honest.
Generating a diagram from words involves revision. Revision involves forks and loopbacks — a non-happy path. So it would be better if the diagram showed a non-ideal path rather than a straight one, with slight visual flair: restrained color variation, selective arrow styling, subtle hierarchy, and conservative emoji use. Enough to add clarity and pizzazz, but not enough to become noise.

This is the moment the diagram stops being decorative and starts being truthful.

Prompt 04 — Voice and Brand Lock-In

Once the structure was right, the output needed to sound like it belonged to the same builder and brand. Style calibration through example, not description.

View prompt
Create a tone-matched LinkedIn post that carries the same professional snark, swagger, and practical voice I tend to use. Strong hooks, good CTA structure, reinforcing my brand, driving interest toward OverKill Hill P³™, Glee-fully Personalizable Tools™, and AskJamie™.
Here is a sample of my recent LinkedIn style and article voice: [reference material pasted in full]

The work had to be structurally sound and recognizably mine.

Prompt 05 — The Council Becomes Formal

The process evolves from drafting into comparative judgment and synthesis.

View prompt
This is the final competition, an art contest where you judge your design against your peers.
First: identify the winner that best depicts the theme and best uses the current features available within Mermaid to showcase the art of the possible.
Second: learn from your self-evaluation and from your peers, then create a V2 that is more awe-inspiring for a LinkedIn audience.

This is where prompting stops being single-model generation and becomes council orchestration.

Prompt 06 — The Hybrid Request

Stop drafting. Start building. Synthesis into a best-of hybridization — post and final Mermaid diagram together.

View prompt
Using the elements from all provided versions, create a best-of hybridization post. Add new elements based on what you’ve learned from your peers and from me.
Then create a Mermaid diagram to accompany it — pulling every useful feature from Mermaid’s toolbox.
Constraints: post can’t exceed 3,000 characters, should contain links, I have a Mermaid account and can share a link to the final diagram, and I plan to use images of both the code and the rendered diagram to maximize character usage and minimize rendering issues.

The hybrid outputs became the material for the final published LinkedIn post. Claude’s structure. ChatGPT’s hook. Perplexity’s ROY framing. Copilot’s Mermaid configuration.

Prompt 07 — The Comment (Origin Story)

Raw autobiographical input. The model’s job was editorial, not creative: shape a live-dictated story into a LinkedIn comment without sanitizing it.

View prompt
Now I want to craft my first comment to my own post.
I began drawing pictures on a computer using AutoCAD 10 around 1991. Since then I’ve used numerous tools to put an idea into visual form — house plans, network diagrams, process flows, mind maps, org charts, UML, crow’s-foot database schemas. The list goes on.
About eight months ago, ChatGPT coughed up a random visual for a simple process I was describing. I asked what the hell it was. It replied: a Mermaid diagram.
So I asked Copilot to generate one too, then discovered that Microsoft Loop (Copilot Pages) could render Mermaid natively.
I can’t begin to count the hours I’ve stared at the Visio UI thinking: how in the hell am I going to draw this?
Now I can just throw spaghetti against the wall at high velocity and minimal drag.

The AutoCAD 10 origin story, the Visio framing, and the spaghetti metaphor all survived into the published comment.

Prompt 08 — The Diagram Competition

Final art contest. All five core council members judged their V1 peers and created V2 diagrams. This prompt produced both round winners.

View prompt
This is the final competition — an art contest where you judge your design against your peers: ChatGPT, Claude, Perplexity, Copilot, and Notion. The plan is to create separate comments on my LinkedIn post with links to the Mermaid-hosted results from each model.
First: identify the top performer — the diagram that best depicts the theme and makes the most of Mermaid’s current feature set. Showcase the art of the possible (pun intended).
Second: learning from your own self-evaluation and your peers’ work, create a V2 that raises the bar for what a Mermaid diagram can do for a LinkedIn audience.

Round 1 top performer: Copilot V1 — renderer-level YAML configuration. Round 2 top performer: Claude V2 — honest recursion, visible revision loops.

Eight prompts. One session. No template. No prompt framework. No pre-scripted structure.

Prompt 01 didn't mention LinkedIn or Mermaid.
Prompt 03 corrected a structural lie in a single paragraph.
Prompt 05 ran a five-model art competition with a pun in the brief.
Prompt 06 assembled the final synthesis from all five council outputs.
Prompt 08 ran the art contest that produced the two round winners.
None of them were polished before sending. All of them worked.

That's not an argument against precision. It's an argument for knowing when velocity is the right tool — and when the diagram you need is cheap enough to draft twice rather than plan once.

All 8 prompts in full, the council brief, and every Mermaid source file are in the public repository: github.com/OKHP3/first-diagram-is-a-liar ↗


Launch Artifacts

Because this began as a LinkedIn experiment, the launch assets matter. The article is the canonical home. The post is the spear. The first comment is the afterburner.

LinkedIn Article — v0.1

The original published piece. The canonical home of the thesis, the council run, and both diagrams.

View article v0.1 copy
The First Diagram Is Usually a Liar

I’ve been drawing pictures on computers since the early 1990s.

Back then, it was AutoCAD R10: black screens, command lines, coordinates instead of cursors. If you wanted to express an idea visually, you did not drag shapes around. You constructed it, line by line, with the same kind of concentration you would use writing a careful spec. That early experience baked in a bias I still carry. A diagram is not decoration. It is a claim about how reality is structured.

And the first claim is almost always wrong.

Over the decades I have used just about every diagramming flavor I could get my hands on: house plans, network diagrams, process flows, mind maps, org charts, UML, and crow’s-foot schemas. Some tools were elegant. Many were painful. Nearly all of them charged the same tax: the cost of translating meaning into visuals. That tax is why people default to text, even when text is a lousy medium for structure. That is how we ended up with the modern classics: the 47-bullet Slack message, the “quick summary” that reads like a novella, and the doc everyone bookmarks and nobody finishes. We did not lack expertise. We lacked compression.

This is v0.1, the first proper build. The article is going to evolve in public. Later revisions will add the winning slide deck, direct links to the live Mermaid renders, public poll rounds where readers can judge the diagrams for themselves, and some behind-the-scenes notes on how the Council of AIs argued its way toward the stronger work. For now, this is the compressed cut.

People do not fail to read because they are lazy. They fail to read because their capacity to absorb meaning is finite and the workday is an attention shredder. Text is excellent for nuance and precision, but it is inefficient at showing flow. It hides forks and dead ends. It forces the reader to reconstruct a system in their head, one sentence at a time, without ever seeing the shape of it.

That reconstruction is the expensive part.
You can watch it happen in real time: somebody nods through the explanation, then asks a question that proves they assembled a completely different machine in their mind. Nobody is stupid. The medium simply did not carry the structure. This is where diagrams earn their keep, when they are honest. Not pretty. Not polished. Honest.

A lot of diagrams are not honest. They are optimism with arrows. They erase the messy edges, exceptions, feedback loops, and human bottlenecks because the author wants to look decisive, or because the tool quietly pressures everything toward tidiness. That is why the first diagram is usually a liar.

Not maliciously. Prematurely.

It shows what we hope is true before we have done enough work to see what is actually there.

Copilot V1 flowchart — Council of AIs Round 1 winner. Shows the ideation process for a LinkedIn post about visual ROI, including decision gates, revision loops, and non-linear thinking paths. Generated using GPT 5.4 Thinking.
Copilot V1, generated using GPT 5.4 Thinking. Top-scoring first-round submission from the Council of AIs evaluation. It earns its place here because it demonstrates the argument instead of merely describing it: a diagram that shows structural thinking without hiding the decision points, the branches, or the places where the process could go wrong.

The diagram above was generated from words. That is the point. The visual was purchased with language and judged by the understanding it produced.
ROY: Return on Your Words
I have started thinking about this through a metric I call ROY, Return on Your Words. Not how many words you wrote. Not how polished they sounded. ROY is the amount of understanding produced per unit of explanation invested.

If you write 800 words and the reader still needs a meeting to understand the system, the ROY is poor. If you write 80 words that generate a picture which makes the system obvious, the ROY can be excellent, even if the picture is not beautiful.

The point is not elegance.
The point is whether the words bought clarity.

That is the question I keep coming back to: did the visual remove enough confusion to justify the effort it took to make it?

For most of my career, the answer was often no, simply because the production cost was too high. Visio and its cousins made diagrams possible, but they also created friction. You were not just thinking about the system. You were wrestling the canvas: alignment, spacing, connectors, swim lanes, and that familiar moment when you are ten minutes deep into nudging boxes and have forgotten what you were trying to explain in the first place. The tool did not just cost time. It pulled you out of the mental mode required to do the work. So, diagrams became presentation artifacts, something you created at the end after the thinking was supposedly finished.

But the thinking was never finished.

That is the point.

When Diagrams Stopped Being “Drawing”
About eight months ago, ChatGPT casually generated a visual for a simple process I was describing. My first reaction was not awe. It was suspicion. It looked too tidy. Then I asked what it was, and it answered: a Mermaid diagram.

That was the pivot.

Mermaid is a text-based diagramming language. You describe structure in plain language, and it renders a diagram. No canvas anxiety. No toolbar archaeology. You are not arranging shapes so much as declaring relationships. Then I discovered Microsoft Loop could render Mermaid natively, and that changed the economics in a way I did not expect. Suddenly the cost curve collapsed. Diagramming stopped feeling like a separate task and started feeling like a mode of thought, structural thinking expressed in text.
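To make that concrete, here is a minimal Mermaid flowchart. The node names and labels are invented for this sketch; they are not taken from the council run:

```mermaid
flowchart TD
    idea[Rough idea] --> draft[First diagram]
    draft --> check{Does it survive scrutiny?}
    check -- no --> revise[Revise the structure]
    revise --> draft
    check -- yes --> ship[Share it]
```

Each line declares a relationship. Layout, routing, and spacing are handled entirely by the renderer, which is exactly why the canvas anxiety disappears.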

When the cost of making a diagram collapses, the diagram can afford to be wrong. A bad first draft stops being a failure and becomes part of the reasoning process. I could draft the lie quickly, spot the lie quickly, and iterate toward something truer.

That changes the game.

The Council of AIs, and Why It Works
That is also when my workflow expanded. Instead of asking one model for one answer, I started asking several: ChatGPT, Claude, Copilot, Perplexity, Gemini, and Notion. Same prompt. Same goal. Independent outputs. Then I pulled them back together and compared the differences.

I call it the Council of AIs, and it is less about outsourcing intelligence and more about manufacturing perspective. The value is not that one model is correct. The value is that disagreement reveals structure.

One model assumes a clean pipeline. Another introduces a feedback loop. Another surfaces an ambiguous handoff. Another exposes a missing actor. When I compare those outputs side by side, I get less performance and more perspective.

And at that point, the diagram becomes the scoreboard.

Does the idea actually have shape?

Or is it still fog with confident nouns?

That comparison process becomes more visible in the next revisions: the first-round and second-round contenders, the Mermaid renders, and the public poll rounds so readers can judge for themselves instead of taking my word for it.

Claude V2 flowchart — Council of AIs Round 2 top scorer. Five swim lanes showing Ignition, Strategy, Craft, Proof, and Ship phases with dashed revision loops and labeled loopback arrows. Generated using Sonnet 4.6 Extended Thinking.
Claude V2, generated using Sonnet 4.6 Extended Thinking. Top-scoring second-round submission after the council saw the first-round work and was pushed to improve. It earns its place here because the form matches the argument: revision loops stay visible, phases stay distinct, and the diagram stops pretending ideation is linear.
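A phase structure like the one described above can be sketched in Mermaid with subgraphs standing in for swim lanes and dashed links for the revision loops. This is a simplified illustration of the shape, not the actual Claude V2 source; the node labels are mine:

```mermaid
flowchart LR
    subgraph Ignition
        spark[Spark]
    end
    subgraph Strategy
        frame[Frame the angle]
    end
    subgraph Craft
        draft[Draft]
    end
    subgraph Proof
        pressure[Pressure-test]
    end
    subgraph Ship
        publish[Publish]
    end
    spark --> frame --> draft --> pressure --> publish
    pressure -.->|structure fails| frame
    pressure -.->|wording fails| draft
```

The dashed loopbacks are the honest part: they admit that a failed pressure-test sends the work backwards, sometimes all the way to strategy.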

Here’s the brief I ran across all six models:
Same brief to 6 models.
Do not read each other.

Deliver:
a 3-part thesis
a Mermaid diagram (first draft)
3 failure modes (where the diagram lies)

Then: compare, adjudicate, synthesize.
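Read as a process, the brief is a fan-out followed by a fan-in, which is itself expressible in Mermaid. The six model names come from the council described earlier; the stage labels are my paraphrase:

```mermaid
flowchart TD
    brief[Same brief] --> m1[ChatGPT] & m2[Claude] & m3[Copilot] & m4[Perplexity] & m5[Gemini] & m6[Notion]
    m1 & m2 & m3 & m4 & m5 & m6 --> compare[Compare outputs]
    compare --> adjudicate[Adjudicate disagreements]
    adjudicate --> synth[Synthesize]
```

The "do not read each other" constraint is what makes the fan-out useful: independence is what turns six outputs into six perspectives instead of one echo.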
The value is not in the words themselves. The value is in the meaning extracted from them. If 75 words of structured prompting generate a diagram that saves five minutes of explanation, prevents one misunderstanding, and helps a team align faster, that picture may be worth more than the words that created it. Not because the words disappeared. Because the words were invested.

That is the work I keep circling back to at OverKill Hill P3: Precision, Protocol, Promptcraft. Words into structure, structure into systems, systems into leverage. Sometimes the right output is a paragraph. Sometimes it is a diagram. Sometimes it is both. But the output has to earn its space.

And yes, this article is part of the experiment too. It is going to be revised in public, diagrammed, judged, and probably accused of overthinking itself before it is done. A few things are landing here over the next few weeks: the full ETCH-AI-SKETCH slide deck, the Round 1 poll where readers vote on the diagrams, the Round 2 redesigns, and a section where each model gets interviewed about its own work in its own voice, including whether it thinks it should have won.

This article is the protoform. The interesting parts are still being forged.

A picture can absolutely be worth more than the words that prompted it into existence, but only if those words buy clarity, compression, and shared understanding.
If the visual can’t do that, it is still just wallpaper with a superiority complex.
Read on LinkedIn ↗

LinkedIn Post — Return on Your Words (ROY)

The feed-native cut. Short enough to move, sharp enough to stop the scroll.

Return on Your Words (ROY)

I have a fundamental distrust of the first diagram. It is almost always a lie. Not a malicious lie, but a premature one; too clean, too linear, and far too eager to pretend the thinking moved neatly from A to B without the detours through confusion, dead ends, and bad ideas wearing professional clothes. This experiment started on March 9, and after a month of generating diagrams around an idea I could not shake, I trust the direction enough to say the work is real, even if it is still evolving.

I am pro-diagram, but anti-diagram-as-performance. I started drawing on computers with AutoCAD R10 in the early 90s and have spent more hours than I care to admit inside Visio, fighting the canvas harder than the problem. What I have grown skeptical of are visuals that look finished mainly because they circulate well, while quietly deleting everything inconvenient about how the work actually happened.

What changed for me was not the aesthetics but the economics. Once diagrams became cheap enough to generate and revise directly from words, they stopped being end-of-project artifacts and started becoming thinking tools. That shift led me to a metric I call ROY, Return on Your Words. The real question is not whether a picture is worth a thousand words. It is whether the words you invested bought actual shared understanding, or just another paragraph someone skims and forgets.

I pushed this further by running the same brief through a Council of AIs to manufacture perspective. The value is not in finding one correct model; it is in seeing how disagreement reveals structure. Six models, same prompt, radically different instincts about what honesty looks like in a flowchart. The differences told me more than any single output could.

This v0.1 article is the first public cut of that experiment. If you work in systems, architecture, or any domain where too many words are doing too little work, it will probably feel familiar. It will not look the same when you come back, and that is intentional.

Read the current version here:

https://www.linkedin.com/pulse/first-diagram-usually-liar-jamie-hill-lv3hc

What is the lowest-ROY artifact in your world right now, the thing that takes the most explanation but creates the least shared understanding?

#Mermaid #VisualThinking #SystemsThinking #AIInPractice #PromptCraft #OverKillHillP3

Copilot V1 and Claude V2 are below. They are the bookends of the experiment.

LinkedIn Comment 1 — Diagramming Trauma (Visio Survivors Group)

The personal addendum: AutoCAD R10, Visio paralysis, and the moment Mermaid repealed the click-drag tax.

Diagramming Trauma (Visio Survivors Group)

AutoCAD R10 in the early 90s is where I learned diagramming the hard way, and years of Visio followed, often 30% thinking and 70% fighting the canvas. That left me with a lasting bias: a diagram is a claim about how reality is structured, and if the tool is heavier than the thought, the diagram is already lying before you finish drawing it.

That is why @Mermaid was the real pivot for me. Once the draw-the-boxes part became a low-cost byproduct of the prompt, the structure became easier to see and easier to challenge. Diagrams stopped being polished end products and started becoming thinking surfaces. They became cheap enough to be wrong on purpose, which is the only way to eventually get them right.

What is one workflow in your world you have avoided diagramming because the tooling cost was higher than the thinking value?

Notion V1 is below. It is the quietest entry in the council, and worth a look for exactly that reason.

The Thesis, Standing on Its Own

A picture can absolutely be worth more than the words that prompted it into existence.

Not because the words disappeared. Not because reading stopped mattering. But because the words were invested in a form that improved assimilation.

That is the work I keep circling back to at OverKill Hill P³™: words → structure → systems → leverage.

The tools will keep changing. The interfaces will keep changing. The models will keep changing. The question underneath them is not going away.

When is meaning best delivered as words — and when is it better delivered as the visual shape those words can bring into existence?

This page is not the last version. It is the first version that has earned its permanence.