
Autonomous Cars: The Future of Mobility

With every mile driven by an autonomous car, we get closer to the future of mobility. For the automobile industry, mobility is a user-centric concept that allows individuals to move, or to be moved, freely with the help of transportation products and services. Autonomous cars were once considered a moonshot in the automobile domain. Today, the sight of a robocar testing on public roads is becoming commonplace in some cities.

The Autonomous Vehicle (AV) industry is growing at a rapid pace. According to forecasts made by Gartner, approximately 745,705 autonomous-ready vehicles will be added to the global market in 2023. In terms of volume, this corresponds to a more than 400% increase over the 137,129 units added in 2018. With billions of dollars being invested, many companies are aggressively participating in the race to achieve complete autonomy. The technology industry is watching excitedly as we move toward a fully driverless experience.


Levels of Autonomy


The levels of autonomy are defined by the Society of Automotive Engineers (SAE). These levels range from “no automation” to “full automation” and have also been adopted as standard terminology by the U.S. Department of Transportation.

To qualify as fully autonomous, an AV progresses through six driving assistance levels:

Level 0: identifies all cars that have no driving automation. Most of the vehicles on the road today belong to this category.

Level 1: includes some level of automation in assisting the driver’s capabilities. The driver is still the sole decision-maker, and the automated system assists in activities such as cruise control, parking assistance, lane departure warning system, etc.

Level 2: includes automation capabilities that allow the automated system to take control of the accelerating, braking, and steering of the vehicle. However, this level demands the driver’s full attention on the road as the AV is not mature enough to make independent decisions.

Level 3: is conditional automation, where the driver can be inattentive intermittently. The automated system takes control under certain conditions while monitoring the driving environment.

Level 4: introduces high automation. This level does not demand driver attention in most cases, but operation is limited to a geofenced area.

Level 5: is the ultimate goal: cars without a steering wheel or acceleration/braking pedals, and that are capable of navigating entirely on their own. This level provides complete automation under all roadway and environmental conditions.

Six Levels of Driving Assistance (Source)



Components of a typical Autonomous Car


Camera

Cameras are used in AVs to capture the surroundings. The captured data allows an AV to create a visual representation of the surrounding driving environment. An AV is equipped with multiple cameras, at the front, rear, and sides, to provide a 360-degree view of its surroundings. Cameras act as the eyes of an AV and operate at wavelengths within the visible light spectrum. In low-light or poor weather conditions, however, the ambient light may not be sufficient for the cameras to provide an informative picture. In such cases, other AV components are used alongside cameras to get a sense of the environment.


Radar

Created in 1935, Radio Detection and Ranging (RADAR) was one of the earliest technologies used to detect the relative distance between objects. A RADAR system sends out radio waves and analyzes the time it takes for them to bounce off a target object and return to a sensor. This round-trip time is used to determine the location and velocity of objects in real time.

Like cameras, RADAR transmitters emit their signals in all directions around the vehicle. Unlike cameras, RADAR is not affected by low-light conditions, and poor weather has only a minimal impact on its performance.
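As a rough illustration of the time-of-flight and Doppler relationships described above, the sketch below computes a target's distance and closing speed from a round-trip time and a frequency shift. The carrier frequency and timing values are illustrative, not taken from any particular radar.

```python
# Illustrative sketch of the RADAR time-of-flight and Doppler relationships.
# All numbers are made up for demonstration only.

C = 299_792_458.0  # speed of light in m/s

def range_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to the target: the wave travels out and back, hence the /2."""
    return C * round_trip_seconds / 2.0

def radial_velocity_from_doppler(freq_shift_hz: float, carrier_hz: float) -> float:
    """Relative (radial) speed of the target from the observed Doppler shift."""
    return (freq_shift_hz * C) / (2.0 * carrier_hz)

if __name__ == "__main__":
    # A return received 0.4 microseconds after transmission -> ~60 m away.
    print(round(range_from_round_trip(0.4e-6), 1), "m")
    # A 1.9 kHz shift on a 77 GHz automotive radar -> ~3.7 m/s closing speed.
    print(round(radial_velocity_from_doppler(1.9e3, 77e9), 2), "m/s")
```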

LIDAR

Along with cameras and RADAR, a Light Detection and Ranging (LIDAR) system is an essential piece of detection hardware.

LIDAR provides an AV with a unique understanding of the dynamically changing environment, allowing it to complement the components discussed above. LIDAR is an active sensing technology that provides 3D input to an AV. It functions by emitting thousands of laser beams each second into the 360-degree surroundings of the vehicle. A sensor analyzes the time it takes for the light to bounce back after reflecting off surrounding objects. These data are then used to create very precise 3D representations of the driving environment. These 3D representations allow an AV to know the precise distance of any surrounding object in a dynamically changing driving environment. They also enable an AV to determine the shape and depth of other cars, pedestrians, and objects, as well as the overall road geography of the driving environment.
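The sketch below illustrates how individual LIDAR returns (a range plus the beam's azimuth and elevation angles) can be converted into 3D points in the sensor frame; the ranges and angles are invented for illustration.

```python
import math
from typing import List, Tuple

def lidar_return_to_point(range_m: float, azimuth_deg: float,
                          elevation_deg: float) -> Tuple[float, float, float]:
    """Convert one time-of-flight return (range + beam angles) into an
    x/y/z point in the sensor frame (x forward, y left, z up)."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

def build_point_cloud(returns) -> List[Tuple[float, float, float]]:
    """Turn a list of (range, azimuth, elevation) returns into a point cloud."""
    return [lidar_return_to_point(r, az, el) for r, az, el in returns]

# Three made-up returns: (range in metres, azimuth in degrees, elevation in degrees).
cloud = build_point_cloud([(12.0, 0.0, -2.0), (7.5, 45.0, 0.0), (30.0, 180.0, 1.0)])
print(cloud)
```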

Except for Tesla, all the major players (Waymo, Cruise Automation, Uber ATG, Argo AI, etc.) consider LIDAR a must-have sensor for driverless cars. Tesla considers LIDAR a bulky and expensive sensor and believes in using computer vision (based on complex neural networks) to create a 3D representation of the driving environment. Tesla's approach is inexpensive and easy to integrate with the hardware, but it requires enormous real-time processing capabilities.

LIDAR has been around since the 1960s, but the high cost of manufacturing LIDAR units still makes the technology an expensive choice for the AV industry.

Components of a typical Autonomous Car (Source)


Functions of an Autonomous Vehicle

Perception


RADAR and ultrasonic sensors were crucial during the inception of the AV dream; however, as market players strive to achieve higher levels of autonomy, more advanced technologies such as LIDARs for sensing and cameras for computer vision have taken over.

Perception plays a vital role in the operation of an autonomous vehicle. Just as humans classify information as relevant or irrelevant, perception allows an AV to filter out unnecessary information from the data collected about its surroundings. The perception system tracks elements on the road such as other vehicles, obstacles, lane/road markings, and traffic signs, along with the possible future states of those elements.

Perception systems collate and fuse information about such elements from various sources such as LIDAR and cameras. One such example is correlating edges in the image from a camera with the edges in a LIDAR point cloud.
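To picture one form of camera/LIDAR fusion, the sketch below projects LIDAR points into a camera image using a pinhole model. The intrinsic matrix and the LIDAR-to-camera transform are hypothetical placeholders; a real system would obtain them from calibration.

```python
import numpy as np

# Hypothetical calibration values. K is the camera intrinsic matrix (focal
# lengths and principal point in pixels); R and t transform points from the
# LIDAR frame (x forward, y left, z up) into the camera frame (z forward,
# x right, y down).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
R = np.array([[0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0],
              [1.0,  0.0,  0.0]])
t = np.array([0.0, -0.3, 0.1])  # LIDAR-to-camera offset in metres

def project_lidar_to_image(points_lidar: np.ndarray) -> np.ndarray:
    """Project an Nx3 array of LIDAR points into pixel coordinates (u, v).

    Points that end up behind the camera are dropped, since they cannot
    appear in the image."""
    points_cam = points_lidar @ R.T + t            # move into the camera frame
    points_cam = points_cam[points_cam[:, 2] > 0]  # keep only points in front
    pixels_h = points_cam @ K.T                    # homogeneous pixel coordinates
    return pixels_h[:, :2] / pixels_h[:, 2:3]      # divide by depth

# Example: two points roughly 10 m and 25 m ahead of the vehicle.
uv = project_lidar_to_image(np.array([[10.0, 0.5, 0.0],
                                      [25.0, -1.0, 0.2]]))
print(uv)
```

Once LIDAR points are expressed in pixel coordinates, they can be compared against image features, for example the edge correlation mentioned above.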

While most of the major players in the market depend on the combined strength of sensors and cameras for perception mapping, Tesla has placed its bets on camera systems and does not believe in having a bulky and expensive sensor such as LIDAR. Tesla believes if humans can perceive and navigate with the help of their eyes, AVs can, too, with cameras (as their eyes) for computer vision.


Localization

Localization is the ability of an AV system to determine its position relative to its environment. Since determining the position and orientation of an AV is a difficult task, localization is one of the core competencies required for the smooth operation of an AV.

Satellite navigation systems such as GPS and the Russian GLONASS, in conjunction with inertial navigation systems, are the traditional solutions for localization. These satellite-based systems tend to be accurate only to within 1-2 meters, which in the world of AVs is an enormous gap – cars frequently operate within a meter of each other.

To ensure seamless and safe driving, AVs require a system that offers an accuracy of around 10 centimeters or better. Accuracy is essential and depends on the quality of the instruments and the signal strength. Imagine being on a highway when the signal received by the AV tells it that it is in the middle of the road, but in reality it is closer to the curb or to another vehicle. Situations like these can be a huge threat to human life and to trust in AVs. AV market players are aggressively scanning the roads to collect data that can train their models to reduce such inaccuracies.

Map-aided localization algorithms, particularly Simultaneous Localization and Mapping (SLAM), have found greater acceptance in recent years. The goal of SLAM is to create a consistent process in which a robot builds a map representing its spatial environment while keeping track of its position within that map. Statistical modeling is at the heart of SLAM and is responsible for reducing the gap between the position estimated from sensor features and the vehicle's actual position, typically by incorporating the vehicle's odometry. Along the same lines, inertial navigation systems that combine gyroscopes, accelerometers, and signal-processing techniques further improve accuracy.
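As a toy illustration of how odometry and a noisy satellite fix can be fused statistically, the sketch below implements a one-dimensional Kalman-style predict/update loop. Real SLAM and inertial-navigation pipelines are far more elaborate, and all noise values here are invented.

```python
from dataclasses import dataclass

@dataclass
class State1D:
    """Position estimate along one axis together with its variance (uncertainty)."""
    position: float
    variance: float

def predict(state: State1D, odometry_delta: float, motion_noise: float) -> State1D:
    """Dead-reckoning step: move by the wheel-odometry estimate.

    Odometry drifts, so the uncertainty grows with every step."""
    return State1D(state.position + odometry_delta, state.variance + motion_noise)

def update(state: State1D, gps_position: float, gps_noise: float) -> State1D:
    """Correction step: blend in a GPS fix, weighted by relative uncertainty."""
    gain = state.variance / (state.variance + gps_noise)
    new_position = state.position + gain * (gps_position - state.position)
    new_variance = (1.0 - gain) * state.variance
    return State1D(new_position, new_variance)

# Start with an uncertain estimate, drive 1 m per step, and correct with GPS
# fixes that are only accurate to ~1.5 m (variance ~2.25 m^2).
state = State1D(position=0.0, variance=4.0)
for gps_fix in (1.1, 1.9, 3.2, 3.9):
    state = predict(state, odometry_delta=1.0, motion_noise=0.05)
    state = update(state, gps_position=gps_fix, gps_noise=2.25)
print(state)
```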

Autonomous Car Components for Position Acquisition (Source)



Prediction & Planning


DARPA’s Urban Challenge (DUC) in 2007 paved the way for the future of AVs. Although only six of the 35 AVs were able to complete the track, the learning gained from the challenge demonstrated that a comprehensive planning framework is essential for an AV to function efficiently.

Modern AVs include planners as part of their technology stack. A planner determines the steps necessary for the AV to navigate its route safely. It also deals with the convoluted and ambiguous situations identified by the perception system. After careful analysis and prediction, the planner makes the required decisions.

A typical AV planner follows a hierarchical framework with the following steps:

  • Mission Planning

This phase considers the objectives and how those objectives evolve. For example, with a particular destination as the objective, the set of routes that can reach that destination evolves as the vehicle continues to drive. Planning is typically performed via graph search over a directed graph that reflects the road network's connectivity, as sketched in the example after this list. This form of planning can be considered a rule-based layer that uses a set of algorithms to achieve the objectives.

  • Behavioral Planning

This phase builds on top of the rule-based layer offered by mission planning. It does so through more complex goal setting that enables an AV to reason about multiple potential scenarios and their corresponding responses. The objective of the behavioral planner is to choose the right behavior so as to ensure safe and efficient path planning. A behavioral planner considers the rules of the road and both static and dynamic objects around the AV. For example, when an AV reaches a 4-way intersection, the behavioral planner is responsible for planning intricacies such as when to stop, when to move, and when to turn. The behavioral planner must also ensure that this planning is computationally efficient.

  • Motion Planning

This phase is responsible for computing a collision-free path from the AV's current position to the goal position determined by the mission planner. The motion planner must complete local objectives within the high-level tasks involved in reaching the destination. Its objective is to generate the most appropriate AV trajectory. This is a computationally complex problem: a complete algorithm may require exhaustively searching all possible paths to determine an appropriate, obstacle-free trajectory.
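As referenced in the mission-planning step above, route search can be sketched as a shortest-path query over a directed road-network graph. The toy network, node names, and edge costs below are hypothetical; production planners operate over far richer map data.

```python
import heapq
from typing import Dict, List, Tuple

# Hypothetical toy road network: each directed edge carries a travel cost
# (for example, expected travel time in seconds).
ROAD_GRAPH: Dict[str, List[Tuple[str, float]]] = {
    "depot":      [("main_st", 30.0), ("side_st", 45.0)],
    "main_st":    [("highway_on", 20.0), ("downtown", 60.0)],
    "side_st":    [("downtown", 40.0)],
    "highway_on": [("downtown", 25.0)],
    "downtown":   [],
}

def plan_route(graph: Dict[str, List[Tuple[str, float]]],
               start: str, goal: str) -> List[str]:
    """Dijkstra search: cheapest sequence of road segments from start to goal."""
    frontier = [(0.0, start, [start])]   # (accumulated cost, node, path so far)
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (cost + edge_cost, neighbor, path + [neighbor]))
    return []  # no route found

print(plan_route(ROAD_GRAPH, "depot", "downtown"))
# -> ['depot', 'main_st', 'highway_on', 'downtown'] (cost 75 vs. 85 or 90)
```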


Understanding the data collected by a typical AV


Last year, Waymo released its open dataset to the autonomous vehicle research community. Other companies, such as Lyft and Argo AI, have taken similar steps. These datasets are a window into the way these companies are using components such as LIDAR sensors and cameras to excel in the race to Level 5 autonomy.

Lumenci has performed a brief comparative analysis of three major players: Waymo, Lyft, and Argo AI. The breakdown below highlights the information that can be extracted from the open datasets released by the three companies.



Size and Coverage of the three datasets:


Waymo: The release contains data from 1,000 driving segments. Each segment captures 20 seconds of continuous driving, corresponding to 200,000 frames at 10 Hz per sensor. The dataset also includes LIDAR frames and images with vehicles, pedestrians, cyclists, and signage carefully labeled, capturing a total of 12 million 3D labels and 1.2 million 2D labels.


Lyft: The release includes more than 55,000 human-labeled 3D annotated frames, a drivable surface map, and an underlying HD spatial semantic map. The semantic map has over 4,000 lane segments (2,000 road segment lanes and about 2,000 junction lanes), 197 pedestrian crosswalks, 60 stop signs, 54 parking zones, and 8 speed bumps.


Argo: The release includes three different types of datasets. The first dataset includes sensor data from 113 scenes with 3D tracking annotations. The second dataset consists of approximately 300,000 scenarios, each containing the motion trajectories of observed objects. The third dataset is a set of HD maps.


The dataset released by Waymo is one of the richest, largest, and most diverse multimodal corpora ever released for research purposes.

While some companies are making their datasets available for research, General Motors-owned Cruise Automation has open-sourced its web-browser-based tool, Webviz. Webviz is a robotics data visualization tool that can be used to build dashboards displaying the data collected by an AV. Text logs, 2D charts, and 3D depictions of the AV environment are a few examples of the types of data that can be visualized on a typical Webviz dashboard.

The idea behind making such proprietary resources available is to empower researchers and data enthusiasts to explore and learn about practical concepts of Level-5 Autonomy such as 2D and 3D perception models, behavioral planning, domain adaptation, etc.


Level-5: Close enough?


The world is continuously and consistently moving towards developing “Everything as a Service” -- the XaaS business model. Level 5 autonomy in fleet operations will indeed be a game-changer in a “Transportation as a Service” paradigm. But here is the key question: when will this happen? Currently, some companies have just started Level 4 testing, and some are still progressing through Level 3.

The future of autonomy depends not just on building perfect autonomous vehicles, but also on creating a safe space in which humans and robocars coexist. Building trust is a quintessential intangible asset for AV market players. Not long ago, unfortunate incidents involving autonomous vehicles significantly shook policy trajectories.

Players in the AV industry are working relentlessly to achieve full autonomy. However, without gaining the trust of the larger user base and of policymakers, it will be hard to reach Level 5 anytime soon.

AUTHOR

Richa Taldar

Associate Consultant at Lumenci

Richa is a Telecommunication and Networking Expert at Lumenci. She has extensive experience in Product Testing and Teardowns. Richa holds a Master's degree in Information Systems Management from Carnegie Mellon University.

