September saw big news on robotic vehicles. Uber launched public test rides of its automated fleet in Pittsburgh; Elon Musk announced improvements to Tesla’s Autopilot driving mode; and the US Department of Transportation (DOT), through the National Highway Traffic Safety Administration (NHTSA), released policies for the deployment of the technology to ensure public safety.
AVs require an advanced range of technologies – millimeter-wave radar, cameras, ultrasonic sensors, lidar scanners, GPS, vehicle-to-vehicle and vehicle-to-infrastructure connectivity, and artificial-intelligence and machine-learning algorithms – all working seamlessly together.
These advances were made possible by global partnerships and investments in autonomous cars. Numerous startups and auto and tech companies, including Google, Drive.ai, Ford, General Motors, Toyota, nuTonomy, Baidu and Delphi, are together generating millions of test miles with automated fleets. The cars are loaded with the latest lidar, radar and camera-based sensor systems from suppliers such as Velodyne and Mobileye, and pack massive data-processing capability thanks to Nvidia and Intel.
AVs are expected to reduce road congestion, bringing wide-ranging work and personal benefits, lower carbon emissions, and a drop in vehicle accidents and deaths. Nonetheless, challenges remain: how can we quantify the risk, or risk reduction, of autonomous vehicles, and manage it?
Should the car slam into a wall, killing the passengers, to save pedestrians? Or vice versa? What if one group is elderly, and the other teenagers? How wide a berth does an autonomous vehicle decide to give a bicyclist on a narrow two-lane street? Does it briefly move into oncoming traffic to maneuver around the rider?
Current vehicles, from automakers including Mercedes-Benz and Tesla with its semi-automated Autopilot feature, to prototype robotic cars from Google, Uber and Volvo, are at Level 2, with varying degrees of advanced driver-assist functions.
At this level, drivers must stay alert and be ready to take control when a situation is too much for the car to handle. Adaptive cruise control, lane-keep assist, automated braking, rear- and forward-facing cameras, and vehicle sensors that monitor the location of other vehicles and objects on the road are helpful driver-assist technologies, and increasingly standard features on new vehicles, that prevent rear-end shunts and weather-related (mainly visibility) crashes. The computer not only brakes faster; it uses detection methods beyond the capability of the human eye. Connected-vehicle technology can send drivers warnings about poor driving conditions, road works, tailbacks, and sharp bends ahead, as well as congestion-reducing information such as live traffic updates, “speed to green” signals, and parking availability, but these can’t be relied on under all circumstances. Though Tesla’s Autopilot allows automated lane changes and receives more detailed information about the vehicle’s surroundings from its radar sensors, it is still at Level 2.
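To make the idea of a driver-assist feature like adaptive cruise control concrete, here is a minimal sketch of the kind of control rule involved: regulate speed from the measured gap to the car ahead. The function name, gains, and limits are all hypothetical simplifications; real systems fuse radar and camera data and use far richer control laws.

```python
def acc_command(gap_m, closing_speed_mps, desired_gap_m=30.0):
    """Return an acceleration command (m/s^2) from the gap to the lead car.

    Positive output accelerates, negative brakes. A simple proportional
    rule: close a too-large gap gently, and brake harder the faster we
    are closing in on the vehicle ahead.
    """
    gap_error = gap_m - desired_gap_m              # > 0 means room to speed up
    accel = 0.05 * gap_error - 0.4 * closing_speed_mps
    return max(-6.0, min(2.0, accel))              # clamp to comfort/physics limits

print(acc_command(40.0, 0.0))    # gap larger than desired: gentle acceleration
print(acc_command(15.0, 5.0))    # close and closing fast: braking
```

The clamp at the end mirrors why Level 2 still needs an attentive driver: the controller only ever commands what the vehicle can physically and comfortably do, and situations outside its simple model are handed back to the human.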
In recent times, crash-avoidance technology in the form of ADAS has become available that is effective, cheap, and socially desirable to deploy, preventing human drivers from making the mistakes that cause crashes.
Volvo’s auto brake automatically applies the brakes if the driver tries to turn in front of an oncoming vehicle at an intersection. This could reduce intersection crashes and prevent collisions with vehicles without reference to maps, signage or lane delineation.
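One way to understand how such an auto-brake system decides to intervene is as a time-to-collision check: brake when the projected time until impact falls below a safety threshold. This is a hypothetical simplification, not Volvo’s actual algorithm; the real system also classifies the object and predicts both vehicles’ paths.

```python
def should_auto_brake(distance_m, closing_speed_mps, ttc_threshold_s=1.5):
    """Trigger emergency braking when time-to-collision drops below a threshold.

    distance_m: current gap to the oncoming vehicle.
    closing_speed_mps: rate at which that gap is shrinking (<= 0 means
    the vehicles are not converging, so no intervention is needed).
    """
    if closing_speed_mps <= 0:
        return False                       # not on a collision course
    time_to_collision = distance_m / closing_speed_mps
    return time_to_collision < ttc_threshold_s
```

Because the check depends only on relative distance and speed from the vehicle’s own sensors, it works, as the text notes, without any reference to maps, signage or delineation.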
Toyota, Renault-Nissan and Ford each plan a Level 4 autonomous vehicle by the early 2020s. Level 4 vehicles should be able to drive themselves almost all the time, except in extreme weather, on bad road surfaces, or where mapping data is insufficient.
To get to Level 4 or 5 capability, engineers are trying to create sensors that are both more powerful and cheaper than current versions. Using deep-learning algorithms, automotive AI systems are being trained in labs around the world on countless images of potential road and traffic conditions, pedestrian behavior, traffic signs, signals and circumstances. They also accumulate crash data to improve the performance of autonomous-vehicle fleets, constantly learning from real-world driving data, getting better at recognizing the things around them and deciding how to respond (steering, braking), making the fleets smarter.
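The core mechanism behind this fleet learning can be shown in miniature: a classifier that labels sensor readings improves as more labeled examples stream in from vehicles. Everything below is a made-up toy (a linear perceptron on two synthetic features), not any automaker’s pipeline; real systems use deep neural networks trained on vastly richer data.

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of ((feature1, feature2), label) with label in {0, 1}."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred                 # learn only from mistakes
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(model, x):
    w, b = model
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Synthetic "fleet" data: (radar return strength, object height) -> obstacle?
fleet_data = [((0.9, 1.8), 1), ((0.8, 1.5), 1),
              ((0.1, 0.2), 0), ((0.2, 0.1), 0)]
model = train_perceptron(fleet_data)
```

The point of the sketch is the update loop: each labeled example nudges the model, so the more driving data the fleet pools, the fewer mistakes the shared model makes, which is exactly why the data-sharing question discussed below matters.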
Yet designing AI that can assess conditions not seen before, uncommon “edge” cases that require human-like reasoning capabilities, remains a significant hurdle.
Driverless vehicles will need very high-definition, constantly updated digital maps and greater consistency of markings and signage on roads. The costs of upgrading and maintaining that infrastructure and mapping need to be considered as well. Any obscured sign or inadequate demarcation could raise liability issues in the event of a collision.
Cyber-security risks remain a major concern, as hackers can threaten the wireless connections essential for driverless cars to access cloud-based data and mapping networks.
Now that car manufacturers are working on upgrading AI and enhancing fleet performance, will cars from different manufacturers be connected to each other and share the data they accumulate and learn?
In September 2016, the “Federal Automated Vehicles Policy” was released. The new federal framework sets out useful guidelines as recommendations for states and companies seeking to test advanced vehicles on public roads.
One key element suggested by the guidelines is that automakers share data on driving incidents, including crashes, with other manufacturers. The purpose of sharing crash data is to make driverless cars safer, which is the primary objective of the Department of Transportation.
A good example of how crash-data sharing would be useful can be drawn from the fatal Tesla Autopilot crash in the spring. A Tesla Model S operating in “self-driving” mode slammed into a tractor-trailer at high speed, instantly killing the driver, a forty-year-old Navy veteran. Afterward, engineers examined the data collected from video footage and other sensor logs, such as radar and sonar. Using that data, Tesla upgraded its software so that Autopilot-enabled cars won’t repeat the mistake. That software update was sent to every vehicle in the Tesla Autopilot fleet, new or old; significantly, though, the information was not shared with Google, GM, Uber, or other companies experimenting with driverless cars. Shared data would help accelerate knowledge and understanding of HAV performance, and could be used to enhance the safety of autonomous-vehicle systems everywhere, regardless of vehicle brand.
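What might a shared, brand-neutral crash record look like? The sketch below is purely hypothetical: the field names are invented for illustration, and no such standard schema existed at the time of writing. The design choice it illustrates is stripping out brand, model, and raw proprietary sensor logs so competitors can learn the failure scenario without exposing trade secrets.

```python
from dataclasses import dataclass, field

@dataclass
class SharedCrashReport:
    """Hypothetical brand-neutral record for an industry-wide crash database."""
    scenario: str                 # e.g. "crossing tractor-trailer, bright sky"
    speed_mps: float              # vehicle speed at time of incident
    automation_level: int         # SAE level (0-5) engaged at the time
    sensor_types: list = field(default_factory=list)   # "radar", "camera", ...
    vin_hash: str = ""            # anonymized vehicle identifier
    # Deliberately excludes make, model, and raw sensor logs.

# A record loosely modeled on the publicly reported Autopilot crash:
report = SharedCrashReport(
    scenario="crossing tractor-trailer, bright sky",
    speed_mps=33.0,
    automation_level=2,
    sensor_types=["radar", "camera", "ultrasonic"],
)
```

With records like this, any manufacturer could replay the scenario against its own perception stack, whatever its sensor suite, which is the translation problem the next paragraphs discuss.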
Engineers and machine-learning algorithms are limited by the data available. If the data remains accessible only to the manufacturer that collected it, then autonomous vehicles designed by Honda, Chrysler, BMW, and GM will each have to make the same mistake before the lesson is learned across all manufacturers’ fleets.
Therefore, it is safe to say that software updates to autonomous-driving systems will reach a company’s own vehicles but not those of competitors. Different autonomous vehicles use different sensors, so crash data from one manufacturer wouldn’t necessarily be an exact fit for another manufacturer’s software; Tesla, for instance, doesn’t use the laser-based lidar systems common among other manufacturers. Even so, engineers can find ways to translate and adapt crash data to their own systems and extract valuable safety lessons.
Car manufacturers should share what they learn with each other in real time; otherwise, driverless cars will wait a long time for the green light to move forward. One can hope the regulations create the possibility that carmakers might have to give their data, a key currency of the information age, to their competitors.