What about AI-enabled surveillance?
One harmful effect of AI is its potential use in surveillance and control.
Some AI ethics-focused groups and thinkers have raised concerns about contemporary AI applications such as facial recognition and predictive policing being used to exert social control, especially targeting marginalized communities. These capabilities are expected to increase in the future, and some civil liberties organizations such as the EFF have been reporting on these uses of AI in both democracies and autocratic regimes.
Security expert Bruce Schneier has argued that AI is enabling a shift from general surveillance (e.g., pervasive use of CCTV) to personalized surveillance of any citizen. For instance, AI can quickly and cheaply search all phone calls and CCTV footage in a city to form a detailed profile on one individual, which was previously only possible through laborious human effort.1
In the future, more powerful AI surveillance, along with other AI-enabled technologies like autonomous weapons, might allow authoritarian or totalitarian states to make dissent virtually impossible, potentially enabling the rise of a stable global totalitarian state.
As of early 2024, access to the most advanced models is moderated through API access by Western corporations, which allows these corporations to restrict uses of their models that they do not condone. These corporations are incentivized not to collaborate with totalitarian governments, or to authorize use of their models by projects perceived as authoritarian, lest they face public backlash. As capabilities increase and powerful models become more accessible to smaller actors,2 this form of moderation may become less effective.
A number of prominent researchers who mainly focus on risks from misalignment (rather than misuse) nevertheless view AI-enabled surveillance as one of the most salient risks that could arise from near-term AI:
- Daniel Kokotajlo has speculated that LLMs could be used as powerful persuasion tools to disproportionately aid authoritarian regimes.
- Nick Bostrom has discussed the potential incentives for widespread surveillance systems augmented by AI, based on state responses to concerns about living in an extremely risky and “vulnerable” world.
- Buck Shlegeris claims that risks of AI-enabled totalitarianism are “at least 10% as important as the risks [he] works on as an AI alignment researcher”.
- Richard Ngo has claimed that outsourcing the task of maintaining control (e.g., through surveillance) to AI makes it easier to consolidate power, which in the limit leads to authoritarianism.
While it is important to undertake measures to mitigate these kinds of risks of AI misuse, doing so is not sufficient; even well-intentioned actors have the potential to accidentally pose an existential risk.
Schneier calls this a shift from “mass surveillance” to “mass spying”, although other authors do not separate these two categories. ↩︎
This could happen for instance when powerful models are open-sourced. ↩︎