Saturday, June 29, 2024

Commentary: The real threat of AI may be the way governments choose to use it

MILITARISING AI

Reports continually suggest that the leading technological nations are locked in an AI arms race. No single state started this race. Its development has been complex, and many groups, from inside and outside governments, have played a role.

During the Cold War, US intelligence agencies became interested in using artificial intelligence for surveillance, nuclear defence and the automated interrogation of spies. It is therefore not surprising that in more recent years, the integration of AI into military capabilities has proceeded apace in other countries, such as the UK.

Automated technologies developed for use in the war on terror have fed into the development of powerful AI-based military capabilities, including AI-powered drones (unmanned aerial vehicles) that are being deployed in current conflict zones.

Russian President Vladimir Putin has declared that the country that leads in AI technology will rule the world. China has also declared its own intent to become an AI superpower.

SURVEILLANCE STATES

The other major concern here is the use of AI by governments in surveillance of their own societies. As governments have seen domestic threats to security develop, including from terrorism, they have increasingly deployed AI domestically to enhance the security of the state.

In China, this has been taken to extreme degrees, with the use of facial recognition technologies, social media algorithms, and internet censorship to control and surveil populations, including in Xinjiang where AI forms an integral part of the oppression of the Uyghur population.

But the West’s track record isn’t great either. In 2013, it was revealed that the US government had developed autonomous tools to collect and sift through huge amounts of data on people’s internet usage, ostensibly for counterterrorism.

It was also reported that the UK government had access to these tools. As AI develops, its use in surveillance by governments is a major concern to privacy campaigners.

Meanwhile, borders are policed by algorithms and facial recognition technologies, tools that domestic police forces are also increasingly deploying. There are wider concerns too about “predictive policing”: the use of algorithms to predict crime hotspots (often in ethnic minority communities), which are then subjected to extra policing effort.

These trends suggest governments may not be able to resist the temptation to use increasingly sophisticated AI in ways that deepen surveillance of their own citizens.
