What are the challenges to the large-scale implementation of self-driving cars?

At CES in January, Mobileye released an unedited 25-minute road-test video showing Mobileye’s self-driving car driving through the hustle and bustle of Jerusalem’s streets. Our primary purpose in releasing this video is to increase the transparency of autonomous driving. We do want to showcase Mobileye’s technology, but more importantly we want to show the world how self-driving cars work, because only then can self-driving cars win the trust of society.

We chose to track and film this drive with a drone so that everyone can get a better sense of the driving environment and the logic behind the self-driving car’s decisions. It should be noted that the only manual intervention during the drive came at around the 20-minute mark, when we changed the drone’s battery. In addition, we added narration to the video to detail where Mobileye’s self-driving technology comes into play and how it handles the complexities of driving.

As far as we know, Mobileye’s solution is unique among the many players in the autonomous driving industry. Our goal is to solve the scaling problem of self-driving cars; to truly reach a self-driving future, scale must be achieved. We believe self-driving cars will first be deployed as shared vehicles, such as self-driving shuttles, before landing in consumer-grade self-driving passenger cars. From my point of view, the challenges of scaling autonomous vehicles center on cost, the adoption of HD maps, and safety. The point I would like to make here, one that is not universally recognized, is that safety must dictate the hardware and software architecture.

Back in 2017, we published our definition of “safe” based on two observations. First, the definition of what it means to drive safely must be formalized at the outset of driving-policy planning, so as to eliminate errors in the decision-making process (such as accidents caused by improper merging); this formal definition is ultimately what lets us strike a balance between safety and usefulness.

Mobileye’s Responsibility-Sensitive Safety model (RSS) revolves around the actual actions of drivers, building formal rules around concepts such as “right of way is given, not taken” to allow autonomous vehicles to make safe decisions. Of course, the parameters of these rules are developed by us in conjunction with governments and standards bodies. On top of this, the RSS model assumes the worst case: within what can reasonably be assumed, what is the worst action other road users might take? This way, we no longer need to predict the behavior of other road users. The theory behind RSS proves that if a self-driving car follows the assumptions and behaviors the theory dictates, its decision-making brain will never cause an accident. Since its publication, RSS has been promoted worldwide.
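To make the “assume the worst case” idea concrete, here is a minimal Python sketch of the longitudinal safe-distance rule from the public RSS paper. The response time and acceleration bounds below are illustrative placeholders, not Mobileye’s production parameters.

```python
# A minimal sketch of RSS's longitudinal safe-distance rule, following the
# public RSS paper. Parameter values are illustrative only.

def rss_min_safe_distance(v_rear, v_front, rho=0.5,
                          a_max_accel=3.0, a_min_brake=4.0, a_max_brake=8.0):
    """Minimum distance (m) the rear car must keep behind the front car.

    Worst case assumed: during the response time `rho` (s) the rear car
    accelerates at `a_max_accel` (m/s^2), then brakes at only `a_min_brake`,
    while the front car brakes as hard as `a_max_brake`.
    """
    v_rear_after = v_rear + rho * a_max_accel
    d = (v_rear * rho
         + 0.5 * a_max_accel * rho ** 2
         + v_rear_after ** 2 / (2 * a_min_brake)
         - v_front ** 2 / (2 * a_max_brake))
    return max(d, 0.0)

# Rear car at 20 m/s following a front car at 15 m/s:
print(f"{rss_min_safe_distance(20.0, 15.0):.1f} m")  # ~54 m
```

If the actual gap stays above this bound, the rear car can never be the cause of a rear-end collision, no matter how hard the front car brakes, which is the sense in which RSS removes the need to predict other road users’ behavior.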

In late 2019, the Institute of Electrical and Electronics Engineers (IEEE) formed a new working group, led by Jack Weast, a senior principal engineer at Intel Corporation, aimed at developing a decision-making standard for autonomous vehicles: IEEE 2846. The members of this working group are broadly representative of the entire autonomous driving industry. In my opinion, this is a reassuring sign, because it shows that we can reach key milestones through industry-wide cooperation, driving industry-wide progress that in turn drives our own development.

The second observation in our published paper has profound implications for our system architecture: even if the robotic driver’s decision-making follows a safety model such as RSS, we may still face accidents caused by failures of the perception system. Perception systems typically consist of cameras, radars, and lidars, plus software that converts raw sensor data into an “environmental model” containing, among other things, the positions and speeds of other road users. Even if the odds are extremely small, there is always a possibility that the perception system misses a relevant object, such as a road user or an inanimate obstacle, or misjudges its position or size, causing an accident.
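As a rough illustration (these are hypothetical, simplified structures, not Mobileye’s actual data types), an environmental model can be thought of as a timestamped list of tracked objects with estimated positions and velocities:

```python
# Hypothetical, simplified "environmental model" as produced by a perception
# system; all field names here are illustrative only.
from dataclasses import dataclass

@dataclass
class TrackedObject:
    object_id: int
    kind: str                          # e.g. "vehicle", "pedestrian", "obstacle"
    position_m: tuple[float, float]    # (x, y) in the ego frame, meters
    velocity_mps: tuple[float, float]  # (vx, vy), meters per second

@dataclass
class EnvironmentModel:
    timestamp_s: float
    objects: list[TrackedObject]

# In these terms, a perception failure is a missing TrackedObject, or a badly
# wrong position/velocity estimate for one that is present.
```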

To better understand the problem, let’s do a back-of-the-envelope calculation. The cumulative driving distance in the United States is about 5.1 trillion kilometers (3.2 trillion miles) each year, producing about 6 million injury accidents. Assuming an average driving speed of 16 kilometers per hour, the human mean time between failures (MTBF) is about 50,000 hours. Say our self-driving car’s MTBF is 10x, 100x, or 1,000x the human MTBF (note that we have ruled out “as good as a human,” because we must do better), and say we deploy 100,000 self-driving cars as shuttles for large-scale operation (a figure consistent with what ride-hailing operators have proposed, and enough to support services in dozens of cities), each running 5 hours a day. With a 10x MTBF there will be roughly one accident per day; with 100x, roughly one per week; with 1,000x, roughly one per quarter.
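The arithmetic above can be checked in a few lines of Python (figures rounded as in the text):

```python
# Reproducing the back-of-the-envelope MTBF arithmetic from the paragraph above.

KM_PER_YEAR = 5.1e12     # ~3.2 trillion miles driven annually in the U.S.
AVG_SPEED_KMH = 16       # ~10 mph average speed
INJURY_ACCIDENTS = 6e6   # injury accidents per year

hours_per_year = KM_PER_YEAR / AVG_SPEED_KMH
human_mtbf_h = hours_per_year / INJURY_ACCIDENTS  # ~53,000 -> "about 50,000"

FLEET_SIZE = 100_000
HOURS_PER_DAY = 5
fleet_hours_per_day = FLEET_SIZE * HOURS_PER_DAY  # 500,000 driving hours/day

for factor in (10, 100, 1000):
    mtbf = 50_000 * factor
    days_between_accidents = mtbf / fleet_hours_per_day
    print(f"{factor:>5}x MTBF -> one accident every ~{days_between_accidents:.0f} day(s)")
# 10x -> every day; 100x -> every ~10 days; 1000x -> every ~100 days (a quarter)

# A 1,000x MTBF also means 50 million hours, i.e. ~800 million km at 16 km/h:
print(f"1000x MTBF = {50_000 * 1000:.0e} hours = {50_000 * 1000 * AVG_SPEED_KMH:.1e} km")
```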

From a societal point of view, it would be a huge achievement if the MTBF of every car on the road increased tenfold; but from a fleet operator’s point of view, both economically and in terms of public opinion, an accident a day is an unbearable outcome. Clearly, if our goal is to scale self-driving cars, the lower bound is an MTBF improvement of 1,000x. Even then, an accident every quarter is nerve-racking.

A 1,000x improvement in MTBF is equivalent to 50 million hours of safe driving, or about 800 million kilometers. Collecting that much data merely to validate the MTBF is daunting, let alone developing a perception system that can actually achieve it.

This is the background to our choice of system architecture. For a perception system to reach such an ambitious MTBF, redundancy must be introduced—specifically, redundancy between systems, not sensor redundancy within a single system. It is the equivalent of carrying both an iOS phone and an Android phone and asking: what is the probability of both crashing at the same time? The answer is roughly the product of the probabilities of each device crashing on its own. Likewise, in the self-driving space, if we build complete end-to-end autonomous driving based on cameras alone, and then build a completely independent capability using radar/lidar, we obtain two independent, redundant subsystems. It is like carrying two smartphones running different operating systems: the chance of both suffering a perception failure at the same time is very small. This is very different from the approach of other players in the self-driving industry, who fuse raw data from all sensors into a single perception system.
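A quick calculation shows why this matters. The per-subsystem failure rates below are made-up illustrative numbers, not measured figures, and the multiplication is valid only under the independence assumption the analogy relies on:

```python
# Illustrative-only numbers: if each independent subsystem suffers a
# perception failure once per 10,000 hours, the chance of both failing in
# the same hour is roughly the product of the individual chances.

p_camera = 1 / 10_000       # per-hour failure probability, camera-only subsystem
p_radar_lidar = 1 / 10_000  # per-hour failure probability, radar/lidar subsystem

p_joint = p_camera * p_radar_lidar  # assumes statistical independence
combined_mtbf_hours = 1 / p_joint   # 100,000,000 hours

print(f"combined MTBF = {combined_mtbf_hours:.0e} hours")
# Two subsystems at 10^4 hours each clear the 5x10^7-hour target with margin,
# provided their failure modes really are independent.
```

The design choice, in other words, is that it is far cheaper to build two subsystems with an attainable MTBF each than one fused system with an unattainable MTBF.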

However, building a camera-only self-driving car is much harder than building one that fuses all sensor data at once. Cameras are notoriously difficult to work with because their access to depth is indirect, inferred from cues such as perspective, shadows, motion, and geometry. At CES this year, I also detailed how Mobileye is building a camera-only (“Vision Only”) self-driving system.

Let’s go back to the video released today, which is a good demonstration of our Vision Only subsystem. As the video shows, the car carries neither radar nor lidar; its perception is supported by 8 long-range cameras and 4 parking cameras, whose feeds go into a computing system powered by just two EyeQ5 chips. In addition, a self-driving car must balance assertiveness and safety, and that balance is achieved through RSS. The streets of Jerusalem are notoriously challenging because other road users tend to drive very assertively, which poses a great challenge for the decision-making model of an autonomous vehicle.

In the future, we will continue to share Mobileye’s progress and views on promoting the large-scale implementation of autonomous vehicles, so stay tuned.
