The proposed policy (it is still a draft, and could be watered down) would require each department to appoint a chief AI officer and have them provide a register of existing AI use cases. This alone is a significant win for transparency. But in addition, the officer must identify systems that could affect people's safety or rights; those systems are then subject to further meaningful constraints.
Impressively, the memo even addresses the deeply unsexy but important question of government procurement of AI. So many societal problems with AI start with inexperienced government agencies adopting new software that they don't adequately understand, that is oversold by its vendor, and that ultimately fails in ways that affect the worst off most severely. Describing and requiring best practices for procurement of AI systems is one of the most significant things government departments can do right now.
This disarray was predictable. GPT-4, the most capable frontier AI model, had barely been released when the first regulatory proposals were brought forward. Regulating a field that has frequent research breakthroughs is hard. The ecosystem for deploying these systems is also fast-changing: will AI companies operate like platforms, tending towards monopoly or duopoly? Or will there be a robust, competitive market for frontier AI models?
Seth Lazar is a professor of philosophy at the Australian National University and a distinguished research fellow at the Oxford Institute for Ethics in AI