Ideation logbook
Multi-agent brainstorming run where many ideas accumulate, get scored, and converge toward a short list. A flat CSV is the substrate; the CLI is where the queries live.
Schema
| Column | Meaning |
|---|---|
| id | Sequential identifier, stable across patches and exports. |
| name | Short title — 3 to 8 words. |
| description | Prose payload. One paragraph. Free-form goes here, not in extra columns. |
| source_agent | Which specialist generated this seed (or human). |
| phase | One of `seed`, `scamper`, `six-hats`, `reverse`, `synectics`. |
| tag | Cluster label — filled by the tagging pass, not the generator. |
| impact | ICE: expected value if it works (1–10). |
| confidence | ICE: how sure we are it'll work (1–10). |
| ease | ICE: how cheap it is to try (1–10). |
| ice_score | Computed (impact × confidence × ease) / 10. Re-computed on patch. |
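The computed column can be sketched as a one-liner with guards; the formula is the one from the schema, while the function name and the range check are illustrative additions:

```python
def ice_score(impact: int, confidence: int, ease: int) -> float:
    """Compute the ICE score for one row: (impact * confidence * ease) / 10.
    Each input is a 1-10 rating, so scores range from 0.1 to 100.0."""
    for value in (impact, confidence, ease):
        if not 1 <= value <= 10:
            raise ValueError(f"ICE ratings must be in 1..10, got {value}")
    return (impact * confidence * ease) / 10
```

A 7/6/8 idea scores 33.6; a perfect 10/10/10 idea caps at 100.0.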
Sample data
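A couple of illustrative rows (hypothetical ideas with made-up scores, shown only to fix the column shapes; `ice_score` follows the formula above):

```csv
id,name,description,source_agent,phase,tag,impact,confidence,ease,ice_score
41,Async critique relay,"Each agent critiques the previous agent's top idea before generating its own, so weak seeds get pruned early.",critic-2,scamper,workflow,7,6,8,33.6
42,Reverse the brief,"State the opposite of the workshop goal and mine the resulting failure modes for real constraints.",human,reverse,framing,8,5,6,24.0
```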
Common queries
```shell
$ logbook filter phase=synectics
$ logbook sort ice_score --desc | head -20
$ logbook top 10 ice_score --group tag
$ logbook compute 'ice_score = (impact*confidence*ease)/10'
```
Humans read the converge output — not the CSV directly — but the CSV is what makes the queries cheap. A single idea's full prose lives in one cell; the interesting questions ("top 10 by tag") never need to load that prose into context.
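The "top N by tag" query is cheap precisely because only two small cells per row are inspected. A stdlib-only sketch of what a `logbook top` pass might do (the function name and return shape are mine, not the tool's):

```python
import csv
from collections import defaultdict
from heapq import nlargest

def top_by_tag(path: str, n: int = 10) -> dict[str, list[tuple[float, str]]]:
    """Group rows by tag and keep the n highest ice_scores per tag.
    Only tag, ice_score, and id are examined; the description cell
    is parsed by the csv reader but never held beyond its row."""
    buckets: dict[str, list[tuple[float, str]]] = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            buckets[row["tag"]].append((float(row["ice_score"]), row["id"]))
    return {tag: nlargest(n, scores) for tag, scores in buckets.items()}
```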
Actions
- **Apply to Miro.** Group top-N rows by tag; export each as a sticky with provenance. The workshop board is the next stage, not a mirror.
- **Generate report.** Render the top 20 rows with full descriptions into a converge memo for the facilitator. A snapshot, regenerated on the next pass.
- **Visualize.** Scatter of ease vs. impact with ice_score as bubble size, colored by tag.
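One way to render that scatter, assuming matplotlib is available (the function name, sizing factor, and output path are illustrative, not part of the tool):

```python
import matplotlib
matplotlib.use("Agg")  # headless: render straight to a file
import matplotlib.pyplot as plt

def plot_converge(rows, out_path="converge.png"):
    """Scatter ease (x) vs impact (y); bubble area tracks ice_score,
    color tracks tag. `rows` is a list of dicts shaped like the schema."""
    tags = sorted({r["tag"] for r in rows})
    color_of = {t: i for i, t in enumerate(tags)}
    plt.figure(figsize=(6, 6))
    plt.scatter(
        [float(r["ease"]) for r in rows],
        [float(r["impact"]) for r in rows],
        s=[10 * float(r["ice_score"]) for r in rows],  # arbitrary scale factor
        c=[color_of[r["tag"]] for r in rows],
        cmap="tab10",
        alpha=0.6,
    )
    plt.xlabel("ease (1-10)")
    plt.ylabel("impact (1-10)")
    plt.title("Converge view: ease vs impact, sized by ICE")
    plt.savefig(out_path, dpi=120)
    plt.close()
```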
Hundreds of ideas, many agents appending in parallel, humans reviewing through filters and clusters. Rereading the whole file every time a new agent joins would be prohibitive. Patch-in-place for score updates; no supersession rows.
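A patch-in-place pass might look like this sketch (a hypothetical stdlib helper, not the tool's implementation): it overwrites the ICE ratings for one id, re-derives ice_score with the schema's formula, and rewrites the file, so the row count never grows.

```python
import csv

def patch_scores(path: str, idea_id: str,
                 impact: int, confidence: int, ease: int) -> None:
    """Patch one row in place: replace the ICE ratings for idea_id and
    re-derive ice_score, then rewrite the file. No supersession row is
    appended, so the file stays one-row-per-idea."""
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        fieldnames = reader.fieldnames
        rows = list(reader)
    for row in rows:
        if row["id"] == idea_id:
            row.update(
                impact=str(impact),
                confidence=str(confidence),
                ease=str(ease),
                ice_score=str((impact * confidence * ease) / 10),
            )
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)
```

For parallel appenders this read-modify-rewrite would need a file lock around the whole function; the sketch omits that.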