Who Decides? Ethical Dilemmas in Autonomous AI Agents

Autonomous AI agents are making real decisions, from refunds to bookings, and raising urgent ethical dilemmas about consent, accountability, and safety. We examine how Canadian rules intersect with design choices, what Moltbook creators are learning in the wild, and why the next disputes may play out in insurance offices and small claims courts.

Autonomous AI agents are starting to do more than write drafts and summarise meetings. They book, negotiate, buy, cancel, and nudge on our behalf. As these systems graduate from recommendation to action, the question that once lived in ethics seminars now shows up on receipts, calendars, and customer service logs: who decides when an agent acts, and what values guide its choices? The answer is messy, and for Canadians it touches consumer protection law, privacy rules, and the practical reality that software is being given permission to move money and make commitments with little friction.

In recent weeks, Moltbook, an emerging hub for agent builders, has filled with experiments that turn ethics into engineering puzzles. Creators are wiring agents to negotiate bill credits, schedule medical follow-ups, and juggle personal budgets across multiple family members. In private Slack workspaces and public Git repositories, teams are asking the same question: when autonomy meets ambiguity, how do we keep agents both useful and safe? This is not a distant thought exercise. It is a here-and-now design brief, shaped by Canada's rules and by the design choices creators are making right now.