Self-driving cars got stuck in the slow lane



In January, Tesla CEO Elon Musk stated, “I would be shocked if we do not achieve full self-driving that is safer than a human this year.” This may sound familiar to anyone who follows Musk’s comments. He promised self-driving cars in 2020, saying, “There are no fundamental challenges.” In 2019, he promised that Teslas would be able to drive themselves by 2020, transforming into a fleet of 1 million “robotaxis.” He has made similar predictions every year since 2014.


In late 2020, Tesla began offering beta tests of its “Full Self-Driving” software (FSD) to roughly 60,000 Tesla owners, who had to pass a safety test and pay $12,000 for the software. These customers test the automated driver-assistance technology and help refine it before it is released to the public.

According to Andrew Maynard, director of the Arizona State University risk innovation lab, Tesla is following the playbook of software companies with the beta rollout, “where the idea is you get people to iron out the kinks.” “The problem is that when software crashes, all you have to do is restart the computer,” he says. “It’s a little more serious when a car crashes.”

By putting new technology in the hands of untrained testers, Tesla is taking an approach that is unusual in the autonomous vehicle (AV) industry. Other companies, such as Alphabet’s Waymo, GM’s Cruise, and the autonomous vehicle startup Aurora, use trained safety operators to test the technology on predetermined routes. While the move has bolstered Tesla’s populist credentials among fans, it has also proven risky for its reputation. Since the company put its technology in the hands of the people, a stream of videos documenting reckless-looking FSD behavior has racked up views online.

One video shows a car in FSD mode veering sharply into oncoming traffic, prompting the driver to swerve off the road and into a field. Another shows a car repeatedly trying to turn onto train tracks and heading toward pedestrians. A third shows the driver struggling to regain control of the vehicle after the system instructs him to take over. In November of last year, the US National Highway Traffic Safety Administration (NHTSA) received a report of what appears to be the first FSD crash; no one was injured, but the vehicle was “severely damaged.”

According to Taylor Ogan, a Tesla FSD owner and CEO of Snow Bull Capital, FSD is adept at driving on highways, where it’s “straightforward, literally.” He says the system is more unpredictable on more complex inner-city streets. Continuous software updates are meant to iron out the bugs: the NHTSA, for example, ordered Tesla to stop the system from performing illegal “rolling stops” (slowly passing through a stop sign without ever coming to a complete stop), while an “unexpected braking” problem is currently under investigation. “I haven’t even seen it get better,” Ogan says of his experience with FSD. “It just does crazier things with more assurance.”

The “learner driver” metaphor, according to Maynard, works for some of FSD’s problems but breaks down when the technology engages in clearly non-human behavior – for example, driving dangerously close to pedestrians without apparent concern, or plowing into a bollard that FSD failed to detect. Similar issues have arisen with Tesla’s Autopilot software, which has been linked to at least 12 accidents (with one death and 17 injuries) caused by the vehicles’ inability to “see” parked emergency vehicles.


There’s reason to believe that the videos that end up on the internet are the most flattering. Not only are the testers Tesla customers, but there is also an army of super-fans who act as a deterrent to sharing any negative information. Any reports of FSD misbehaving can elicit a barrage of criticism; any critical posts on the Tesla Motors Club, a forum for Tesla drivers, are invariably met with people blaming users for accidents or accusing them of wanting Tesla to fail. “People are afraid that Elon Musk will take away the FSD that they paid for, and that they will be attacked,” Ogan says.

According to Ed Niedermeyer, author of Ludicrous: The Unvarnished Story of Tesla Motors, who was “bombarded by an online militia” when he first started reporting on the company, this helps to shield Tesla from criticism. “This faith and sense of community… has been absolutely critical to Tesla’s survival throughout its history,” he says. He goes on to say that Musk can claim to be a year away from fully autonomous driving without losing the trust of his fans.

Tesla isn’t the only company that has missed its own self-imposed autonomous driving deadlines. Cruise, Waymo, Toyota, and Honda all said they would launch fully self-driving cars by 2020. Progress has been made, but not on the scale that was anticipated. What went wrong?

“Number one,” says Matthew Avery, director of research at Thatcham Research, “is that this stuff is harder than manufacturers realized.”

While about 80% of self-driving is relatively simple – keeping the car on the right side of the road, avoiding collisions – the next 10% involves more difficult situations like roundabouts and complex junctions. “The last 10% is the most difficult,” Avery says. “That’s when you have, say, a cow standing in the middle of the road that refuses to move.”

The AV industry is stuck on the last 20%, particularly the last 10%, which deals with the perilous problem of “edge cases.” These are unusual or rare events that occur on the road, such as a ball bouncing across the street followed by a running child; complicated roadworks that require the car to mount the kerb to pass; or a group of protesters waving signs. Or there’s that stubborn cow.

Self-driving cars rely on a combination of machine learning software and basic hand-coded rules such as “always stop at a red light.” The machine-learning algorithms consume huge amounts of driving data in order to “learn” how to drive safely. Because edge cases appear only rarely in that data, the car never learns how to respond to them appropriately.
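As a rough, hedged sketch of that combination – not any manufacturer’s actual architecture – the snippet below lets a hand-coded rule veto whatever a learned policy proposes; every name and value in it is invented for illustration:

```python
# Illustrative sketch only -- not any manufacturer's real driving stack.
# A learned model proposes an action; hand-coded rules can veto it.
from dataclasses import dataclass

@dataclass
class Perception:
    light_state: str          # hypothetical field: "red", "yellow" or "green"
    obstacle_ahead_m: float   # hypothetical field: distance to nearest obstacle

def learned_policy(scene: Perception) -> str:
    """Stand-in for a machine-learned driving policy.
    A real policy would be a neural network trained on driving data."""
    return "proceed"

def apply_coded_rules(scene: Perception, proposed: str) -> str:
    """Hard-coded safety rules layered on top of the learned output."""
    if scene.light_state == "red":
        return "stop"          # the "always stop at a red light" rule
    if scene.obstacle_ahead_m < 5.0:
        return "brake"         # a simple collision-avoidance rule
    return proposed            # otherwise trust the learned policy

if __name__ == "__main__":
    scene = Perception(light_state="red", obstacle_ahead_m=30.0)
    print(apply_coded_rules(scene, learned_policy(scene)))  # -> "stop"
```

The weakness described above sits in the learned part: if a situation never appears in the training data, no hand-written rule is guaranteed to catch it either.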

The problem with edge cases is that they aren’t all that uncommon. “These kinds of edge cases may be rare for an individual driver, but when you average out all the drivers in the world, these kinds of edge cases happen very frequently to someone,” says Melanie Mitchell, a computer scientist and professor of complexity at the Santa Fe Institute.
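A back-of-the-envelope calculation makes the point; the numbers below are invented purely to illustrate the scale effect Mitchell describes:

```python
# Toy numbers, invented only to illustrate the scale effect Mitchell describes.
p_edge_case_per_mile = 1e-6          # assume a given edge case occurs once per million miles
miles_per_driver_per_year = 10_000   # assumed annual mileage for one driver
fleet_size = 1_000_000               # assumed number of vehicles on the road

per_driver = p_edge_case_per_mile * miles_per_driver_per_year
across_fleet = per_driver * fleet_size

print(f"Expected encounters per driver per year: {per_driver:.2f}")    # ~0.01 -> rare for one driver
print(f"Expected encounters across the fleet:    {across_fleet:,.0f}") # ~10,000 -> routine for someone
```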

While humans can generalize from one situation to the next, just because a self-driving system appears to “master” one situation does not mean it will be able to replicate it in slightly different circumstances. It’s a problem for which no solution has yet been found. “Trying to give AI systems common sense is a challenge because we don’t even know how it works in ourselves,” Mitchell says.


“A major part of real-world AI has to be solved to make unsupervised, generalised full self-driving work,” Musk tweeted in 2019. In the absence of a breakthrough in AI, autonomous vehicles that function as well as humans are unlikely to hit the market anytime soon. To partially get around this problem, other AV makers use high-definition maps – charting the lines of roads and pavements, as well as the placement of traffic signs and speed limits. However, these maps must be updated on a regular basis to keep up with ever-changing road conditions, and even then, there is no guarantee of accuracy.
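As a hedged illustration of what such a map might contain – the fields, values, and staleness threshold below are invented, not any vendor’s actual format – a minimal map segment with a check for outdated survey data might look like this:

```python
# Invented, minimal structure -- real HD map formats are far richer than this.
from dataclasses import dataclass
from datetime import date

@dataclass
class MapSegment:
    road_id: str
    lane_centerline: list[tuple[float, float]]     # simplified 2D points along the lane
    speed_limit_kmh: int
    stop_sign_positions: list[tuple[float, float]]
    last_surveyed: date

def is_stale(segment: MapSegment, today: date, max_age_days: int = 90) -> bool:
    """Flag segments that haven't been re-surveyed recently.
    The 90-day threshold is an arbitrary assumption for illustration."""
    return (today - segment.last_surveyed).days > max_age_days

segment = MapSegment(
    road_id="main-st-001",
    lane_centerline=[(0.0, 0.0), (0.0, 50.0)],
    speed_limit_kmh=50,
    stop_sign_positions=[(0.0, 48.0)],
    last_surveyed=date(2021, 11, 1),
)
print(is_stale(segment, today=date(2022, 5, 1)))  # True -- roadworks may have changed the layout
```

Keeping every segment fresh is exactly the maintenance burden the paragraph above describes.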

The ultimate goal of autonomous vehicle designers is to develop vehicles that are safer than human-driven vehicles. In the United States, roughly one person dies for every 100 million miles driven by a human (including drunk driving). To prove that their technology is safer than a human, AV developers would have to beat this benchmark, says Philip Koopman, associate professor of electrical and computer engineering at Carnegie Mellon University. However, he believes that industry-wide metrics such as disengagement data (how often a human must take control to avoid an accident) obscure the most critical aspects of AV safety.

“Safety isn’t about everything working perfectly all of the time. The rare case where it doesn’t work properly is what safety is all about,” says Koopman. “It has to function 99.99999999% of the time.” AV companies are still working on the first few nines, he says, with several more nines to go – and each additional nine is ten times harder to achieve.
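To make the “nines” concrete, here is a rough calculation. It assumes the reliability figure is meant per mile (the article doesn’t specify a unit), reuses the one-death-per-100-million-miles benchmark from the previous paragraph, and loosely treats “failures” as comparable to fatalities, which is a simplification:

```python
# Rough illustration only: assumes the reliability percentage applies per mile
# and loosely treats "failures" as comparable to the fatality benchmark above.
human_fatality_rate = 1 / 100_000_000            # ~1 death per 100 million miles

for nines in range(7, 11):
    failure_rate = 10.0 ** (-nines)              # e.g. 9 nines (99.9999999%) -> 1e-9 failures/mile
    print(f"{nines} nines: roughly 1 failure per {1 / failure_rate:,.0f} miles")

print(f"Human benchmark: roughly 1 fatality per {1 / human_fatality_rate:,.0f} miles")
```

Each extra nine multiplies the required miles between failures by ten, which is why Koopman describes every additional nine as ten times harder.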

Some experts believe that AV manufacturers will not need to completely crack human-level intelligence in order to roll out self-driving vehicles. “I think self-driving cars would be very reliable and trustworthy if every car was a self-driving car, and the roads were all perfectly mapped, and there were no pedestrians around,” Mitchell says. “It’s just that there’s this entire ecosystem of humans and other human-driven cars that AI doesn’t yet have the intelligence to deal with.”

According to Koopman, the edge-case problem is compounded by AV technology that acts “extremely confidently” when it’s wrong and, he says, is terrible at recognizing when it doesn’t know. The dangers of this were evident in the 2018 Uber crash in which Elaine Herzberg was killed while walking her bicycle across a road in Arizona. The software flipped between different classifications of Herzberg’s form – “vehicle,” “bicycle,” and “other” – until 0.2 seconds before the crash, according to an interview with the safety operator who was behind the wheel at the time.
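The snippet below is a purely hypothetical toy, not a reconstruction of Uber’s software, but it shows why flip-flopping labels are dangerous if a downstream planner waits for a consistent classification before reacting:

```python
# Purely hypothetical toy -- not a reconstruction of Uber's software.
# If a downstream planner waits for a consistent label before reacting,
# constantly flipping classifications delay the decision until it is too late.
frames = ["vehicle", "other", "bicycle", "other", "vehicle",
          "bicycle", "other", "bicycle", "bicycle", "bicycle"]  # made-up label per camera frame
STABLE_FRAMES = 3   # assumed rule: act only after 3 identical labels in a row

streak, current = 0, None
for i, label in enumerate(frames):
    streak = streak + 1 if label == current else 1
    current = label
    if streak >= STABLE_FRAMES:
        print(f"Stable classification '{label}' only reached at frame {i + 1} of {len(frames)}")
        break
else:
    print("No stable classification reached before the end of the sequence")
```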

Self-driving cars can mostly function well in the right conditions, such as quiet roads and pleasant weather, which is why Waymo is able to operate a limited robotaxi service in Phoenix, Arizona. Even so, the fleet has been involved in minor accidents, and one vehicle was repeatedly stumped by a set of traffic cones despite the assistance of a remote worker. (A Waymo executive said they were not aware of these incidents happening more often than they would with a human driver.)

Despite the obstacles, the AV industry continues to grow. The Uber crash jolted the industry: manufacturers halted trials amid negative press, and Arizona’s governor revoked Uber’s testing permit. The self-driving divisions of Uber and another ride-hailing company, Lyft, were later sold off.

This year, however, has seen a return of confidence – with more than $100 billion invested over the past ten years, the industry can ill afford to be complacent. Carmakers General Motors and Geely, as well as the AV company Mobileye, say people may be able to buy self-driving cars as early as 2024. Both Cruise and Waymo plan to start operating commercial robotaxis in San Francisco this year, and Aurora hopes to have fully autonomous vehicles on US roads within the next two to three years.


The lack of regulation governing this bold next step has some safety experts concerned.

According to Koopman, every company currently “basically gets one free crash,” and the regulatory system in the United States is based on trust in the AV maker until a serious accident occurs. He cites Uber and AV startup Pony.ai, whose driverless test permit in California was recently revoked following a serious collision involving one of its vehicles.

Tesla’s decision to put its technology in the hands of customers has had the unintended consequence of attracting regulators’ attention. Because it claims its systems are more basic, Tesla has so far avoided the more stringent requirements imposed on other AV makers, such as reporting crashes and system failures and using trained safety professionals as testers. However, the California Department of Motor Vehicles, the state’s autonomous driving regulator, is considering changing the system, owing to the potentially dangerous videos of the technology in action, as well as the NHTSA’s investigations into Tesla.

The lack of regulation thus far demonstrates the absence of global consensus in this area. “Is the software going to mature fast enough that it’s both trusted and regulators give it the green light, before something really bad happens and pulls the rug out from under the entire enterprise?” asks Maynard.

