How self-driving cars predict behaviour of other road users: Video reveals vehicle plotting course of cyclist and pedestrians to avoid hitting them
- The footage shows exactly what self-driving cars ‘see’ as they navigate roads
- In the first video, the car spots a cyclist overtaking a parked van and slows down
- The second piece of footage shows how it deals with the unpredictable behaviour of children crossing the street
Footage shows what a self-driving car ‘sees’ as it navigates past cyclists and pedestrians, and how it anticipates the unpredictable behaviour of children.
Using a test vehicle, the technology can predict the path of a cyclist overtaking a parked van and slow down to allow them to pass.
The car perceives the cyclist’s path and recognises any potential point of collision, shown as a red dotted line resembling a bridge.
Waymo – formerly Google’s self-driving car project – has released two videos showing the information its technology receives as it moves through a complex environment.
Object detection is a two-part process: image classification followed by image localisation.
Image classification is determining what objects are, like a person, while image localisation is providing their specific location, seen as the boxes in the footage.
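The two-part output described above can be sketched with a toy example. The classes, bounding boxes and scene data below are hypothetical illustrations, not Waymo’s actual pipeline:

```python
# Toy sketch of object detection as a two-part output:
# classification (what the object is) plus localisation (where it is).
# The scene data here is invented purely for illustration.

def detect_objects(scene):
    """Return (label, bounding_box) pairs: classification + localisation."""
    detections = []
    for obj in scene:
        label = obj["class"]   # image classification: what the object is
        box = obj["box"]       # image localisation: (x, y, width, height)
        detections.append((label, box))
    return detections

scene = [
    {"class": "cyclist", "box": (120, 40, 30, 80)},
    {"class": "pedestrian", "box": (300, 60, 20, 60)},
]

for label, (x, y, w, h) in detect_objects(scene):
    print(f"{label} at x={x}, y={y}, size={w}x{h}")
```

In the footage, the localisation step corresponds to the boxes drawn around each road user.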
The video shows an aerial view of the car driving down a road and how it differentiates between obstacles, for example, it sees a tall, red box as a cyclist.
In the second video, we see a Waymo self-driving Chrysler Pacifica minivan recognise a group of children crossing the street.
Self-driving car technology uses a combination of cameras, radar, ultrasonic sensors and lidar – a detection system that uses light from a laser.
But according to Jalopnik, just observing objects doesn’t necessarily mean the car will steer itself around obstacles or even people.
It has to know what these objects are, which is why the car’s ability to tell the difference between a cyclist and a child is so important.
Cyclists and children tend to behave differently, so the technology must use deep learning to learn their general behaviour and predict what is going to happen.
For instance, a child walking on a footpath might spot an ice cream truck across the road and dart into the street, as children sometimes do.
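A deliberately simple stand-in for this kind of prediction is extrapolating a road user’s recent positions forward in time. Real systems use learned models of behaviour; this linear sketch only illustrates the idea of projecting a path to anticipate where someone is heading:

```python
# Hypothetical sketch: predict a road user's future path by extrapolating
# their last observed velocity. Not a real behaviour-prediction model.

def predict_path(positions, steps):
    """Extrapolate the most recent velocity over future time steps."""
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    vx, vy = x1 - x0, y1 - y0  # displacement per time step
    return [(x1 + vx * k, y1 + vy * k) for k in range(1, steps + 1)]

# A child's last two positions suggest movement toward the road.
print(predict_path([(0, 0), (1, 2)], 3))  # [(2, 4), (3, 6), (4, 8)]
```

If any predicted point crosses the car’s own planned path, the car can slow down before the situation develops.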
Controlling a moving object through a complex environment full of other moving objects is no easy feat for technology.
As humans, we can see a pedestrian on a street corner and tell if that person is paying attention to their surroundings or not.
This is thanks to an innate understanding of human behaviour, which enables us to adjust our focus and speed accordingly.
Machines lack many of the innate abilities humans rely on when driving.
Controlling speed and direction – and adjusting how those changes are made based on environmental conditions such as road-surface friction, weather and visibility – has to be taught to the technology through deep learning.
HOW DO SELF-DRIVING CARS ‘SEE’?
Self-driving cars often use a combination of normal two-dimensional cameras and depth-sensing ‘LiDAR’ units to recognise the world around them.
However, others make use of visible light cameras that capture imagery of the roads and streets.
They are trained with a wealth of information and vast databases of hundreds of thousands of clips which are processed using artificial intelligence to accurately identify people, signs and hazards.
In LiDAR (light detection and ranging) scanning – which is used by Waymo – one or more lasers send out short pulses, which bounce back when they hit an obstacle.
These sensors constantly scan the surrounding areas looking for information, acting as the ‘eyes’ of the car.
While the units supply depth information, their low resolution makes it hard to detect small, faraway objects without help from a normal camera linked to it in real time.
In November last year Apple revealed details of its driverless car system that uses lasers to detect pedestrians and cyclists from a distance.
The Apple researchers said they were able to get ‘highly encouraging results’ in spotting pedestrians and cyclists with just LiDAR data.
They also wrote they were able to beat other approaches for detecting three-dimensional objects that use only LiDAR.
Other self-driving cars generally rely on a combination of cameras, sensors and lasers.
An example is Volvo’s self-driving cars, which rely on around 28 cameras, sensors and lasers.
A network of computers process information, which together with GPS, generates a real-time map of moving and stationary objects in the environment.
Twelve ultrasonic sensors around the car are used to identify objects close to the vehicle and support autonomous drive at low speeds.
A wave radar and camera placed on the windscreen read traffic signs and the road’s curvature, and can detect objects on the road such as other road users.
Four radars behind the front and rear bumpers also locate objects.
Two long-range radars on the bumper are used to detect fast-moving vehicles approaching from far behind, which is useful on motorways.
Four cameras – two on the wing mirrors, one on the grille and one on the rear bumper – monitor objects in close proximity to the vehicle and lane markings.