Worries about artificial intelligence have suddenly seized Washington: The White House just hauled in a roster of tech CEOs to press them on the safety of their new AI platforms, and Congress is scrambling for ways to regulate a possibly disruptive and risky new technology.

There are a lot of immediate concerns about the latest generation of AI tools — they could accelerate misinformation, disrupt jobs and entrench hidden unfairness. But one concern hovers over the rest, both for its scale and the difficulty of fixing it: the idea that a super-intelligent machine might quickly start working against its human creators.

It sounds fanciful, but many experts on global risk believe that a powerful, uncontrolled AI is the single most likely way humanity could wipe itself out.

At the heart of the threat is what’s called the “alignment problem” — the idea that a powerful computer brain might no longer be aligned with the best interests of human beings. Unlike fairness or job loss, alignment has no obvious policy solutions. It’s a highly technical problem that some experts fear may never be solvable. But the government does have a role to play in confronting massive, uncertain problems like this. In fact, it may be the most important role it can play on AI: to fund a research project on the scale the problem deserves.
