This pilot brings together Motional's driverless technology and the smart-infrastructure technology of Derq, an MIT spinoff, in a cutting-edge demonstration of V2X (vehicle-to-everything) communication. In the pilot, cameras placed high above busy intersections and connected to Derq's AI systems, running on roadside computers, will transmit data to Motional's vehicles. The press release reports that:
“From their elevated vantage point, Derq’s system can see cars exiting parking lots, pedestrians stepping between parked cars, and cyclists weaving through cars stopped at a red light. It can also predict their movements to provide advance notice of potentially challenging interactions with Motional’s vehicles.”
Initially, Motional and Derq will evaluate the different viewpoints (from Motional’s sensor suite and Derq’s cameras) offline. By studying the data offline, they can replay a scene over and over, and examine how the driverless vehicle navigates the scene with and without the extra data from Derq’s cameras.
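As a rough sketch of what such an offline comparison might look like, assume each logged scene is a list of detected road-user positions in the ego vehicle's frame. The scene data and the simple "closest gap" metric below are hypothetical, invented for illustration; they are not Motional's actual tooling or metrics.

```python
from math import hypot

def closest_gap(detections, ego=(0.0, 0.0)):
    """Smallest distance (metres) between the ego vehicle and any
    detected road user in a logged scene."""
    return min(hypot(x - ego[0], y - ego[1]) for x, y in detections)

# One logged scene: what the vehicle saw on its own, and the same
# scene with an extra roadside detection (an occluded pedestrian).
onboard_only = [(12.0, 0.5), (25.0, -3.0)]
with_derq = onboard_only + [(6.0, 4.0)]

# Replaying the same scene with and without the extra data shows the
# nearest hazard the planner could have reacted to in each case.
print(closest_gap(onboard_only))  # nearest hazard without roadside data
print(closest_gap(with_derq))     # nearer hazard revealed by roadside data
```

The value of the replay approach is exactly this kind of controlled A/B comparison: the scene is identical in both runs, so any difference in behaviour is attributable to the extra data.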
Later in the pilot, Motional’s vehicles will receive Derq’s bird’s-eye view data in real time to help inform their driving. The vehicles will process that view alongside the data received through the vehicles’ LiDAR, cameras, and radar, to make the safest and smartest driving decisions.
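To illustrate how such fusion might work in principle, here is a minimal Python sketch that merges roadside detections into a vehicle's own detection list, adding only objects the vehicle cannot already see. The class, field names, and the one-metre deduplication radius are all invented for this example; Motional's real pipeline is certainly far more sophisticated.

```python
from dataclasses import dataclass
from math import hypot

@dataclass(frozen=True)
class Detection:
    x: float    # position in a shared map frame, metres
    y: float
    kind: str   # "car", "pedestrian", "cyclist", ...
    source: str # "onboard" or "infrastructure"

def fuse(onboard, infrastructure, radius=1.0):
    """Merge infrastructure detections into the onboard list, skipping
    objects the vehicle already sees within `radius` metres."""
    fused = list(onboard)
    for det in infrastructure:
        duplicate = any(
            det.kind == seen.kind
            and hypot(det.x - seen.x, det.y - seen.y) < radius
            for seen in onboard
        )
        if not duplicate:
            fused.append(det)
    return fused

onboard = [Detection(5.0, 2.0, "car", "onboard")]
roadside = [
    Detection(5.2, 2.1, "car", "infrastructure"),         # same car, seen twice
    Detection(-3.0, 8.0, "pedestrian", "infrastructure"), # occluded from the vehicle
]
for det in fuse(onboard, roadside):
    print(det.source, det.kind)  # the pedestrian is added, the duplicate car is not
```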
Two high-traffic intersections in Las Vegas will be used during the pilot. They were selected for their complexity, which includes:
- Significant pedestrian and cyclist interactions (pedestrian crossings and bike lanes)
- Challenging sight lines due to trees, signs, and other large obstacles
- Complex vehicle interactions among multiple lanes of traffic
Motional is a joint venture between Hyundai and Aptiv, started in 2020. Describing the company's approach to the self-driving problem, CEO Karl Iagnemma put it this way: "Choosing how to get from A to B safely, that's an emotional decision. So Motional will keep that insight central to every product we develop."
Driving is no easy feat, and it is dangerous. CDC figures show the worldwide consequences in numbers: roughly 3,700 people are killed per day, nearly half of them vulnerable road users (pedestrians, cyclists, and motorcyclists). That adds up to a yearly death toll of 1.35 million, with many more injured.
From a pure perception point of view, we humans are arguably not optimally designed for driving. To begin with, humans cannot respond to anything outside our perceptual field, such as a vehicle or pedestrian hidden by a tree, another vehicle, or sunlight reflecting off a nearby building. Self-driving cars improve on this by synthesizing input from multiple onboard sensors (vision, radar, and lidar), each with different abilities to detect hazards a human driver might miss, to give a 360-degree field of perception.
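To make the 360-degree idea concrete, here is a toy Python check of whether a set of sensor fields of view, expressed as angular sectors around the vehicle, jointly covers the full circle. The sensor names and sector widths are invented for illustration, not Motional's actual sensor suite.

```python
def covers_full_circle(sectors, step_deg=1):
    """Check whether (start_deg, end_deg) sectors around the vehicle
    jointly cover all 360 degrees. Sectors may wrap past 360."""
    covered = set()
    for start, end in sectors:
        span = (end - start) % 360 or 360
        for d in range(0, span, step_deg):
            covered.add(int((start + d) % 360))
    return all(deg in covered for deg in range(0, 360, step_deg))

# Hypothetical suite: a forward camera plus two wide corner radars.
sensors = {
    "front_camera": (330, 30),
    "left_radar": (30, 210),
    "right_radar": (210, 330),
}
print(covers_full_circle(sensors.values()))         # True
print(covers_full_circle([sensors["front_camera"]]))  # False: camera alone leaves blind spots
```

A real perception stack fuses modalities rather than merely tiling angles, but the coverage union is the basic reason a sensor suite can "see" where a driver's eyes cannot.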
But another limitation of being made of mere flesh and blood is that we can only perceive the world from one vantage point at a time. Computers do not share this limitation, as anyone familiar with the IoT (Internet of Things) can tell you. So why not leverage this advantage too on the journey to make self-driving cars safer than their human counterparts? V2X capabilities, such as the bird's-eye view provided by Derq, could certainly give Motional's vehicles a distinct advantage over self-driving systems without V2X integration. But more data is not always the optimal solution to complex problems, and this approach carries its own risks. Relying on V2X infrastructure could make the scaling up of self-driving vehicles dependent on local authority budgets and the rollout of that infrastructure. That said, Motional seems aware of this: the trial focuses specifically on high-complexity intersections, suggesting the company envisions the technology as supplementary rather than integral to its vehicles' abilities.
It will be interesting to follow the results of this pilot and to see whether other self-driving developers jump on board with V2X as well.
Written by Joshua Bronson,
RISE Mobility & Systems (Människa-autonomi)