Modern robots do much more than work on car-assembly lines.
They provide entertainment, care for the elderly, even patrol city streets. Robots are here to stay, and it’s only a matter of time before they become a widespread and indispensable part of our daily lives.
They’ll be in courtrooms, assessing who is more likely to attempt to flee and what kind of sentence they should receive. They’ll be at banks, deciding if you qualify for a loan. And at HR departments, helping companies decide if they really need to hire you.
With great power comes great responsibility, so this question should be asked sooner rather than later: Is artificial intelligence (AI), the “brain” behind the machine, up to the task? And as this very same AI gets more complex and “humanlike,” what logical-cognitive “disorders” may creep in and disrupt its decision-making processes?
AI is only as good as its creators, who, of course, can make mistakes: Modern GPS algorithms provide inaccurate or suboptimal directions, putting lives at risk; chatbot AIs often produce bizarre or offensive responses; and security robots run people over. In the future, we want fewer of these incidents, and to prevent them, it’s important to understand how robots “think” — how they learn, and how they act on what they have learned.
Introducing the robot psychologist, or robopsychologist: a person who builds a bridge between human and AI learning and interaction. A robopsychologist helps the AI acquire information in a way that enables better decision making. They also analyze contemporary learning and decision-making algorithms and adjust them to function better in real-world scenarios.
Much of what is already used in conventional psychology can be applied to work with AI, and this is especially true of learning. Like a human baby, a newly created AI is a blank slate. Its learning algorithms provide it with the tools it needs to acquire new information, similar to the mechanisms babies use to observe and learn new behavior.
However, just like us, AI can run into difficulties if left unattended. These result in learning and decision-making errors — issues that artificial intelligence can’t resolve on its own. At that point, it’s a robopsychologist’s job to adjust a robot’s learning pattern to allow for correction and continuation of the learning process.
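The kind of intervention described above can be pictured with a deliberately simple toy model. The sketch below is purely illustrative (every name and number in it is hypothetical, not drawn from any real system): a learner whose step size decays too aggressively stalls short of its target, and an outside check, standing in for the robopsychologist, restores the step size so learning can continue.

```python
def train(with_intervention, epochs=100):
    """Toy sketch of supervised correction of a stalled learner.

    The learner nudges its estimate toward a fixed target, but its
    step size shrinks so fast that, left unattended, it stalls far
    from the goal. An external monitor can detect the stall and reset
    the step size, letting learning resume.
    """
    target, estimate, step = 10.0, 0.0, 1.0
    for _ in range(epochs):
        error = target - estimate
        estimate += 0.5 * step * error   # gradient-style update toward the target
        step *= 0.5                      # overly aggressive decay schedule
        stalled = step < 1e-3 and abs(target - estimate) > 0.1
        if with_intervention and stalled:
            step = 0.5                   # intervention: restore the step size
    return abs(target - estimate)
```

Run without intervention, the learner is left with a large residual error; with the monitoring step enabled, it closes in on the target. The point of the analogy is only that the fix comes from outside the learner, not from the learning rule itself.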
The entire procedure can also be reversed: AI learning models could be used to enhance and test procedures applied to humans. For example, a new way to acquire a skill or learn a language could be converted into an algorithm, fed to the AI, and then observed and corrected for added efficiency.
At first, this new profession may seem far-fetched, but it’s actually practical and useful. It stems from the fact that to create a viable AI that can think, learn and make informed choices, one needs to understand the platform that is already capable of all these things: human psychology.
Having psychologists who understand AI development and actively work on it will help companies develop machines that are not only more efficient but also more humanlike. This trait will also make them more appealing to consumers, who will be able to interact with AI more easily. Finally, the robopsychologist will have one more important job: to teach ethics to AI in a way the machine can understand, “internalize” and prioritize during its decision-making process.
This new, yet logical, evolution in human understanding of AI is paving the way to more natural, seamless robot integration into our everyday activities. Just like robotics and other segments of AI development, this new branch of psychology still has a way to go before we can hope to have someone like Walter help us around the house.