Ottawa's AI Policy Quietly Rewrites Procurement Rules
Canadian federal AI policy is shifting from principles to purchasing power. New procurement clauses tied to the Algorithmic Impact Assessment and Treasury Board guidance are changing how vendors bid for Ottawa’s business, even before Bill C‑27’s AI regulations are final.
Canada’s federal AI policy is no longer living only in white papers. It is turning up in the fine print of government contracts, where new transparency and risk controls are quietly reshaping who wins federal work and how their systems are built. Departments are referencing the Treasury Board of Canada Secretariat’s Directive on Automated Decision-Making, asking bidders to show their risk practices, and, in some cases, promising public notice when algorithms factor into services. It is procurement as policy, and it is moving faster than legislation.

What changed: Ottawa’s central digital policy suite already required departments to assess automated decision tools before launch. The Algorithmic Impact Assessment, a questionnaire that scores risk from Level I to IV, has been mandatory for relevant systems in federal use for several years. Over the past year, according to postings on CanadaBuys and departmental notices, those expectations have started to appear in requests for proposals. Vendors are asked to help complete or inform an AIA, to describe model testing and monitoring, and to identify whether generative AI is in the loop. The shift is incremental, but it raises the bar for