Doctors and public health experts call AI R&D an 'existential threat'

‘AI is an existential threat to humanity’: Now doctors and public health experts across four continents issue fresh call for artificial intelligence to be halted – warning it could be used to make weapons of mass destruction

  • Experts say advanced AI could ‘learn to bypass any constraints in its code’ 
  • Their dire warnings were published in the journal BMJ Global Health

Medical experts have issued a fresh call to halt the development of artificial intelligence (AI), warning it poses an ‘existential threat’ to people.

A team of five doctors and global health policy experts from across four continents said there were three ways in which the tech could wipe out humans.

First is the risk that AI will help amplify authoritarian tactics like surveillance and disinformation. ‘The ability of AI to rapidly clean, organise and analyse massive data sets consisting of personal data, including images collected by the increasingly ubiquitous presence of cameras,’ they say, could make it easier for authoritarian or totalitarian regimes to come to power and stay in power.

Second, the group warns that AI can accelerate mass murder via the expanded use of Lethal Autonomous Weapon Systems (LAWS). 

And, lastly, the health experts expressed worry over the potential for severe economic devastation and human misery, as untold millions lose their livelihoods to those hard-working bots. ‘Projections of the speed and scale of job losses due to AI-driven automation,’ according to the authors, ‘range from tens to hundreds of millions over the coming decade.’

Their commentary comes only weeks after more than a thousand scientists, including John Hopfield from Princeton and Rachel Bronson from the Bulletin of the Atomic Scientists, signed a letter calling for a halt to AI research over similar concerns. 

The fears over AI come as experts predict it will reach the singularity by 2045, the point at which the technology surpasses human intelligence and moves beyond our control.

Of course, today’s text-based AI resources, like OpenAI’s ChatGPT, don’t exactly pose the apocalyptic threats that these health policy professionals have in mind. 

The experts – led by a physician with the International Institute for Global Health at United Nations University – said their most dire warnings applied to a highly advanced, and still theoretical, category of artificial intelligence: self-improving general-purpose AI, or AGI.

Unlike today's systems, AGI would be capable of truly learning and modifying its own code, allowing it to perform the wide range of tasks that only humans can handle today. 

In their commentary, the health experts argue that such an AGI ‘could theoretically learn to bypass any constraints in its code and start developing its own purposes.’

‘There are scenarios where AGI could present a threat to humans, and possibly an existential threat,’ the experts wrote, ‘by intentionally or unintentionally causing harm directly or indirectly, by attacking or subjugating humans or by disrupting the systems or using up resources we depend on.’

While such a threat is likely to be decades away, the health policy experts’ commentary, published today in the British Medical Association journal BMJ Global Health, unpacked the myriad possibilities for abuse of today’s level of AI technology. 

Describing threats to ‘democracy, liberty and privacy,’ the authors explained how governments and other large institutions might delegate the complex tasks of mass surveillance and online disinformation campaigns to AI.

In the former case, they cited China’s Social Credit System as one example of a state tool to ‘control and oppress’ human populations. 

‘When combined with the rapidly improving ability to distort or misrepresent reality with deep fakes,’ the authors wrote in the latter case, ‘AI-driven information systems may further undermine democracy by causing a general breakdown in trust or by driving social division and conflict, with ensuing public health impacts.’


 The Biden Administration said the technology was ‘one of the most powerful’ of our time, adding: ‘But in order to seize the opportunities it presents, we must first mitigate its risks.’

Describing threats posed to ‘peace and public safety,’ the authors detailed the development of Lethal Autonomous Weapon Systems (LAWS), killing machines like the T-800 Endoskeleton of the Terminator films. LAWS, these experts say, would be capable of locating, selecting, and engaging human targets all on their own.

‘Such weapons,’ they write, ‘could be cheaply mass-produced and relatively easily set up to kill at an industrial scale. For example, it is possible for a million tiny drones equipped with explosives, visual recognition capacity and autonomous navigational ability to be contained within a regular shipping container and programmed to kill.’

The researchers’ last broad threat category, ‘threats to work and livelihoods,’ drew attention to the likelihood of impoverishment and misery as ‘tens to hundreds of millions’ lose their jobs to the ‘widespread deployment of AI technology.’

‘While there would be many benefits from ending work that is repetitive, dangerous and unpleasant,’ these medical professionals wrote, ‘we already know that unemployment is strongly associated with adverse health outcomes and behaviour.’

Perhaps most alarming, nearly one-in-five professional AI experts appear to agree with them. 

The authors cited a survey of members of the AI society in which 18% of participants stated that they believed development of advanced AGI would be existentially catastrophic for humanity. 

Half of the members of the AI society surveyed predicted that AGI would likely start knocking on our door sometime between 2040 and 2065.

Researchers in Silicon Valley signed a letter issuing similar warnings last month. Their ranks included DeepAI founder Kevin Baragona, who told DailyMail.com: ‘It’s almost akin to a war between chimps and humans.

‘The humans obviously win since we’re far smarter and can leverage more advanced technology to defeat them.

‘If we’re like the chimps, then the AI will destroy us, or we’ll become enslaved to it.’
