Our Digital Coworkers Have Arrived
And Nobody Introduced Us
We're being told to adopt AI agents or get left behind. But most of us have no idea how they actually work, which means we're handing control to these systems on blind faith. That's why we need to shine a light on what's really driving them.
Not Just a Storyteller
Not Just an Engineer
Both
Ron Ploof's career as an analog engineer and lifelong storyteller makes him both a student of human nature and someone unafraid to roll up his sleeves and get technical.
By combining the lens of a storyteller with the analytical discipline of an engineer, he researches AI agents to understand how they work, how they fail, and how we can build them to work as planned.
What Ron Investigates
How Agents Work
Opening the hood on the mechanics driving today's AI agent systems.
How They Fail
Identifying the failure modes, hallucinations, and trust gaps that undermine agent systems.
Building Agents Right
Designing AI agents that perform as intended, mitigating risk while staying aligned with the humans who use them.
Risk Analysis Tool for Agents
AI agents produce probabilistic outcomes, and companies have established ways to manage that uncertainty while humans remain in the loop. But once an agent acts on its own, that oversight disappears, exposing organizations to unforeseen financial, legal, and reputational risks. Therefore, we propose the Systematic Agent Risk Assessment (SARA) to quantify those risks before deployment.
Explore Project Open SARA →