Building in public means being honest about what didn't work. These tools were real experiments. Some had the right concept but the wrong implementation. We document them here because the reasoning matters.
One of those experiments was a six-persona AI review panel that evaluated nonprofit fundraising appeals and grant narratives. Each persona represented a distinct donor or funder perspective and returned a score, qualitative feedback, and a revision roadmap.
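For concreteness, here is a minimal sketch of the panel's shape in Python. This is illustrative only: the persona roster and the `ask_model` callable are hypothetical stand-ins, not the tool's actual roster or API.

```python
from dataclasses import dataclass

@dataclass
class PersonaReview:
    """One persona's verdict on a draft appeal or grant narrative."""
    persona: str           # e.g. "major donor", "program officer"
    score: int             # 1-10 rating from this perspective
    feedback: str          # qualitative notes in the persona's voice
    revision_roadmap: str  # concrete edits the persona would want

# Hypothetical persona set -- illustrative, not the tool's actual roster.
PERSONAS = [
    "first-time small donor",
    "major donor",
    "foundation program officer",
    "board member",
    "monthly sustainer",
    "corporate sponsor",
]

def review_draft(draft: str, ask_model) -> list[PersonaReview]:
    """Run the draft past each persona. `ask_model` is a stand-in for
    whatever LLM call the real tool used; here it is assumed to return
    a (score, feedback, roadmap) tuple."""
    reviews = []
    for persona in PERSONAS:
        prompt = (
            f"You are a {persona} reading this fundraising draft. "
            f"Score it 1-10, give feedback, and suggest revisions.\n\n{draft}"
        )
        score, feedback, roadmap = ask_model(prompt)
        reviews.append(PersonaReview(persona, score, feedback, roadmap))
    return reviews
```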
We retired it over security. The tool required users to paste real donor data, appeal copy, and grant narratives into a shared AI interface. That created real risk around data handling, confidentiality, and organizational liability, risk we weren't comfortable with once we understood it more deeply.
The concept is sound. If this comes back, it comes back with proper data architecture, no shared inputs, clear security documentation, and explicit guidance on what should and shouldn't go into an AI tool.
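As one illustration of that last requirement, a rebuilt version might screen drafts before anything reaches a model. The sketch below is a hypothetical minimal pre-flight check, not a real safeguard; an actual deployment would need a proper PII-detection layer, not two regexes.

```python
import re

# Illustrative patterns only -- obvious email and US phone formats.
PII_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone number": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def preflight(text: str) -> list[str]:
    """Return the names of PII patterns found; an empty list means
    the text passed this (minimal) screen."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]

draft = "Thank Jane at jane@example.org for her gift."
problems = preflight(draft)
if problems:
    print("Blocked before sending to the model:", ", ".join(problems))
```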