Researchers teach autonomous cars how to safely avoid ‘selfish’ motorists by using social psychology to label them ‘cooperative’ or ‘egoistic’
- The system uses social value orientation to help predict other motorists’ behavior
- By judging a driver’s position on the road, it estimates their likelihood of behaving aggressively
- The model was trained by observing other motorists’ behavior
- Researchers want to apply the system to pedestrians as well
Researchers at MIT say they’ve made progress in helping self-driving vehicles drive harmoniously with aggressive motorists.
The system, developed by researchers at MIT’s Computer Science and Artificial Intelligence Laboratory, uses social psychology tools to classify drivers as either selfish or selfless.
‘Working with and around humans means figuring out their intentions to better understand their behavior,’ says Wilko Schwarting, lead author on the new paper that will be published this week in the Proceedings of the National Academy of Sciences.
Self-driving cars like Waymo’s (pictured above) could use improved algorithms to help avoid accidents and help achieve full autonomy
‘People’s tendencies to be collaborative or competitive often spill over into how they behave as drivers. In this paper, we sought to understand if this was something we could actually quantify.’
Specifically, the system – which was trained by observing human driving behavior – uses social value orientation (SVO) to assign opposing cars on the road with a likelihood of being ‘cooperative, altruistic, or egoistic.’
By taking into account a driver’s position on the road and their likelihood of executing an aggressive or passive move, the researchers say the system was able to predict the behavior of other cars on the road with 25 percent more accuracy.
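The idea of scoring drivers on a cooperative-to-egoistic scale can be illustrated with a minimal sketch. This is not the MIT model itself: the angle bands below are the standard cutoffs from the social-psychology SVO literature, and the reward estimates are hypothetical placeholders for whatever the real system infers from a car’s observed maneuvers.

```python
import math

def svo_angle(reward_self, reward_other):
    """Social value orientation angle, in degrees, from a driver's
    estimated weighting of their own reward vs. others' reward."""
    return math.degrees(math.atan2(reward_other, reward_self))

def classify(angle_deg):
    # Standard SVO bands from the social-psychology literature;
    # a driver who values others' outcomes as much as their own
    # (angle near 45 degrees) counts as cooperative.
    if angle_deg > 57.15:
        return "altruistic"
    if angle_deg > 22.45:
        return "cooperative"
    if angle_deg > -12.04:
        return "egoistic"
    return "competitive"

# A driver inferred to weigh others' progress equally with their own:
print(classify(svo_angle(1.0, 1.0)))  # cooperative
# A driver inferred to care only about their own progress:
print(classify(svo_angle(1.0, 0.0)))  # egoistic
```

In the actual system, those reward estimates would come from observed driving behavior, such as whether a car yields when merging or cuts in front of others.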
The system has yet to be tested on public roads; further testing will determine whether it can safely be deployed.
Tesla is among the players in the self-driving race and has been featuring its ‘Autopilot’ system in cars, which can take over crucial driving functions
While somewhat unproven in a real-world setting, researchers say the system brings self-driving algorithms one step closer to being able to effectively negotiate with humans on the road – a major obstacle toward full autonomy.
The next step will be to use the system to help model the behavior of ‘pedestrians, bicyclists, and other agents,’ which may help to avoid potentially fatal accidents like those experienced by Uber’s self-driving cars.
This month, Uber’s self-driving system was found responsible for hitting and killing a pedestrian last year after failing to identify the jaywalker.
HOW DO SELF-DRIVING CARS ‘SEE’?
Self-driving cars often use a combination of normal two-dimensional cameras and depth-sensing ‘LiDAR’ units to recognise the world around them.
However, others make use of visible light cameras that capture imagery of the roads and streets.
They are trained with a wealth of information and vast databases of hundreds of thousands of clips which are processed using artificial intelligence to accurately identify people, signs and hazards.
In LiDAR (light detection and ranging) scanning – which is used by Waymo – one or more lasers send out short pulses, which bounce back when they hit an obstacle.
These sensors constantly scan the surrounding areas looking for information, acting as the ‘eyes’ of the car.
While the units supply depth information, their low resolution makes it hard to detect small, faraway objects without help from a normal camera linked to it in real time.
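The ranging step described above can be sketched in a few lines: because the laser pulse travels to the obstacle and back, the distance is half the round trip at the speed of light. The nanosecond figure in the example is illustrative, not taken from any particular sensor.

```python
# Speed of light in a vacuum, metres per second
C = 299_792_458.0

def lidar_distance_m(round_trip_seconds):
    """Distance to an obstacle from a pulse's round-trip time.
    The pulse covers the distance twice (out and back), so divide by 2."""
    return C * round_trip_seconds / 2.0

# A pulse returning after roughly 66.7 nanoseconds implies
# an obstacle about 10 metres away.
print(round(lidar_distance_m(66.7e-9), 2))
```

Real LiDAR units repeat this measurement millions of times per second across many angles to build the depth map the article describes.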
In November last year Apple revealed details of its driverless car system that uses lasers to detect pedestrians and cyclists from a distance.
The Apple researchers said they were able to get ‘highly encouraging results’ in spotting pedestrians and cyclists with just LiDAR data.
They also wrote they were able to beat other approaches for detecting three-dimensional objects that use only LiDAR.
Other self-driving cars generally rely on a combination of cameras, sensors and lasers.
An example is Volvo’s self-driving cars, which rely on around 28 cameras, sensors and lasers.
A network of computers processes this information, which, together with GPS, generates a real-time map of moving and stationary objects in the environment.
Twelve ultrasonic sensors around the car are used to identify objects close to the vehicle and support autonomous drive at low speeds.
A wave radar and camera placed on the windscreen read traffic signs and the road’s curvature, and can detect objects on the road such as other road users.
Four radars behind the front and rear bumpers also locate objects.
Two long-range radars on the bumper are used to detect fast-moving vehicles approaching from far behind, which is useful on motorways.
Four cameras – two on the wing mirrors, one on the grille and one on the rear bumper – monitor objects in close proximity to the vehicle and lane markings.