Apologies for not posting this sooner. Maybe we make this a two-parter if we aren’t sick of the topic by the end of this week.
First of all, there’s this article by Henry Kissinger in The Atlantic, which raises concerns:
“But that order is now in upheaval amid a new, even more sweeping technological revolution whose consequences we have failed to fully reckon with, and whose culmination may be a world relying on machines powered by data and algorithms and ungoverned by ethical or philosophical norms.”
Next is a fairly long paper called “Cyber, Nano, and AGI Risks.”
The paper is about existential risks. Implicit in it is the idea that nanotechnology and artificial intelligence are existential risks, unlike, say, climate change. Climate change is not an existential risk because its effects will appear over a period of decades, if not centuries, giving people time to adapt. And even if we can’t adapt, in the worst-case scenarios where it kills a couple of billion people, that’s only 10-20% of the human population. These other things are existential risks because they arise suddenly, over a period of weeks, and might kill everyone. Cyber is included because if we can’t guard against cyber risk, then we can’t guard against the others.
Civilization, taken as a whole, is already a superintelligence: it is vastly more intelligent than any individual, it is composed of both human and machine intelligences, and its intelligence is increasing at an exponentially accelerating rate.
The paper points out that before AI reaches general superintelligence, it will have achieved super-ability in narrow domains, such as hacking computer systems.
Finally, the paper “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation” describes the many ways in which AI could be used for malicious purposes.