A beta version of Tesla’s “Full Self-Driving” Autopilot update has begun rolling out to certain users. And man, if you thought “Full Self-Driving” was even close to a reality, this video of the system in action should disabuse you of that notion. It is perhaps the most comprehensive illustration yet of just how morally dubious, technologically limited, and potentially dangerous Autopilot’s “Full Self-Driving” beta program is.
In a 13-minute video posted to YouTube by user “AI Addict,” we see a Model 3 with FSD Beta 8.2 fumbling its way around Oakland. It appears hapless and utterly confused at all times, never passably imitating a human driver. Early in the video, the front-seat passenger remarks on the car’s correct decision to pass a bunch of double-parked cars rather than wait behind them—but the moment of praise is cut short when the car parks itself right on the center line while trying to get into a left-turn lane.
That’s because—like all semi-autonomous systems on sale today—Tesla’s “Full Self-Driving” and “Autopilot” systems are not, in fact, fully autonomous. They require constant human supervision and split-second intervention. And now that the latest beta version of the software is out in the wild, it seems to demand more attention than ever.
Quite quickly, the video moves from “embarrassing mistakes” to “extremely risky, potentially harmful driving.” In autonomous mode, the Tesla breaks a variety of traffic laws, starting with a last-minute attempt to cross a solid line and execute an illegal lane change. It then attempts a left turn alongside another car, only to give up midway through the intersection and disengage.
It goes on to take another turn far too wide, landing in the oncoming lane and requiring driver intervention. Shortly thereafter, it crosses into the oncoming lane again on a straight stretch of road shared with cyclists and oncoming traffic. It then drunkenly stumbles through an intersection, once again needing driver intervention to get through. While making an unprotected left after a stop sign, it slows before the turn and lingers in the path of oncoming cars, which have to brake to avoid hitting it.
The video isn’t even halfway done, but the litany of errors continues with another random disengagement. The Tesla attempts a right turn on red where that’s prohibited, once again nearly breaking the law and requiring the driver to actively stop it. It randomly halts in the middle of the road, proceeds straight through a turn-only lane, stops behind a parked car, and eventually nearly slams into a curb while making a turn. After holding up traffic to creep around a stopped car, it confidently drives directly into the oncoming lane before realizing its mistake and disengaging. That’s another traffic violation on the books—and yet another moment where the befuddled car simply gives up and leaves it to the human driver to sort out the mess.
The Tesla’s software is also defeated by cars stopped in the roadway and by an intersection where it clearly has the right of way. Then comes another near collision. This time, the Tesla arrives at an intersection where it has a stop sign and cross traffic doesn’t. It proceeds anyway with two cars approaching; the first narrowly clears the Model 3’s front bumper, and the trailing car has to brake to avoid T-boning it. It is absolutely unbelievable and indefensible that the driver, who is supposed to be monitoring the car to ensure safe operation, did not intervene there. It’s even wilder that this software is available to the public.
But that isn’t the end of the video. To round it out, the Model 3 nearly slams into a Camry that has the right of way while trying to negotiate a kink in the road. Once it gets through that intersection, it heads straight for a fence and nearly plows into it. Both incidents required driver intervention to avoid a collision.
To be sure, nobody has solved autonomous driving. It is a challenging problem that some experts say will only be solved with highly advanced artificial intelligence. Tesla’s software clearly does a decent job of identifying cars, stop signs, pedestrians, bikes, traffic lights, and other basic obstacles. Yet to think this constitutes anything close to “full self-driving” is ludicrous. There’s nothing wrong with having limited capabilities, but Tesla stands alone in its inability to acknowledge its own shortcomings.
When technology is immature, the natural reaction is to keep working on it until it’s ironed out. Tesla has opted against that strategy here, instead choosing to sell software it knows is incomplete, charging a substantial premium, and hoping that buyers have a nuanced, advanced understanding of its limitations—and the ability and responsibility to jump in and save it when it inevitably gets baffled. In short, every Tesla owner who purchases “Full Self-Driving” is serving as an unpaid safety supervisor, conducting research on Tesla’s behalf. Perhaps more damning, the company takes no responsibility for the software’s actions, leaving it to driver discretion to decide when and where to test it.