Machine learning and artificial intelligence already influence our lives to varying degrees. AI kicks in every time you unlock your phone with facial recognition and plays a big role in the news and advertising you see on social media. And though these influences might seem trivial at first glance, many experts fear they are a sign of bigger problems that lie ahead.
In 2015, many prominent scientists and tech moguls signed an open letter calling for more research into the impacts of AI.
The signatories included Professor Stephen Hawking and SpaceX CEO Elon Musk, who has recently said AI could overtake us in as little as five years.
The danger from AI is twofold, according to Dr Krzysztof Walas from the Institute of Robotics and Machine Intelligence at the Poznań Politechnik in Poland.
On the one hand, there is a risk machines could become smart enough to push humans towards acts of evil.
On the other hand, there is the less likely, theoretical scenario in which machines carry out physical acts against humans.
In both cases, the expert has argued AI technology needs to be regulated and taught right from wrong.
He said: “I see here two possibilities. First of all, these systems can have a strong influence on people through non-physical means.
“They could manipulate, for example, with the aid of social media, even on a mass scale.
“This way they could, for example, persuade others towards hating their neighbours or another ethnic group.
“Let’s imagine this scenario in a country where access to weaponry is easy.
“A smart program would theoretically manipulate some people in a way that they do something bad.
“A computer might not have access to, for example, weapons, but to people who do use them.”
In the second scenario, which loosely resembles something out of the Terminator film franchise, machines themselves could cause harm.
Dr Walas said: “The second option concerns physical acts by robots connected to the web.
“With the aid of the internet you could, for example, alter their algorithms, and these machines could do something that would harm people.
“I sometimes think that if a completely autonomous car kidnapped me, there would be nothing I could do.
“I’m drawing up, of course, dark scenarios here.
“We are presently talking about theoretical, and not actual, threats.”
Another concern is AI’s ability to learn despicable behaviours from the internet.
Most famously, Microsoft’s TayTweets chatbot on Twitter was shut down after users taught it shockingly racist language.
Dr Walas said: “There is a lot of hate on the internet. What would happen if an intelligent system learned to interact with people purely by analysing internet forums?”
The expert suggested work needs to be done to teach AI ethical choices from the get-go.
Teaching machines to distinguish between right and wrong could be key to preventing disaster further down the line.
Dr Walas said: “There are ideas to teach them ethical things, to show them choices that lead towards happiness, to good.
“Then, even when they surpass human capabilities, maybe it will be rewarding for them to work for the betterment of humanity, and not something else that might harm us.”
However, Dr Walas said it is hard to tell when AI will become developed enough to outsmart its creators.