
Will We Need to Start Protecting AVs From People?

Once the roads have been completely taken over by AVs, what’s to stop pedestrians from completely taking over the roads?

On a recent episode of StarTalk, the science, pop-culture, and comedy talk show hosted by astrophysicist and science ambassador Neil deGrasse Tyson, the topic of self-driving vehicles took an interesting turn. The discussion centred on a potential system-level effect of AVs: that people will hamper AVs going about their business (for instance, by jaywalking with impunity) because, unlike human-driven vehicles, AVs no longer pose a threat.

Featuring Malcolm Gladwell, the well-known author and journalist for the New Yorker, the episode opens with Gladwell recounting how he tried to get run over by a Waymo vehicle: he ran around it, alongside it, in front of it, and so on, trying to provoke the AI ‘driver’ into bumping into him. He concludes that Waymo’s AI is the most long-suffering driver imaginable. It never loses its temper and never makes you feel vulnerable, scared, or threatened, even when you, the pedestrian, are actively trying to get run over.

This raises an interesting question, which they discuss at some length: what happens to jaywalking when the road is full of AVs? The argument Malcolm puts forward is that the primary reason people don’t jaywalk is quite simply that it is dangerous. The danger comes from human drivers, who have limited attention and are irritable, in a rush, and prone to errors. Essentially, the opposite of Waymo. The consequence, they hypothesize, is that in a world full of AVs, pedestrians will interrupt traffic flows at the drop of a hat. Need to get to work in a hurry? Just walk across the freeway. Want to go for a run? The road is a perfect place for it. That leaves self-driving vehicles struggling to make any meaningful progress.

In the discussion, a possible solution is raised: programming some ‘psychopathy’ into the code. If the cars were a bit dangerous, maybe people would think twice before stepping out into traffic.

Personal Comment:

We think this discussion raises several interesting themes and questions that are seldom discussed. The first might loosely be called a complex-systems theme. A lot of ink has been spilt praising the potential efficiencies and benefits of self-driving cars, such as reduced congestion, lower emissions, less noise, and increased mobility. What this discussion points out, however, is that the human systems that use and interact with transport are complex and will not react in easily predictable ways. The second is the theme of regulating new technology. By regulation we mean it broadly: official legal regulation as well as unofficial social regulation along the lines of etiquette.

It is, we think, obvious that designing AVs to be unpredictably dangerous is a non-starter for addressing potentially disruptive pedestrian behaviour; the talk show is meant to be part comedy, after all! There are, however, other ways of restraining this potentially problematic interaction between humans and AVs.

One way is to focus on physical barriers. Most major roadways connecting cities include fencing designed to keep wild animals, which are unaware of the danger, out of harm’s way. Similarly, we could ensure AVs’ uninhibited movement by erecting physical obstructions to pedestrians. This may work for specific lanes or connecting routes, but it would be unacceptable (and ridiculous) if attempted in city centres. Such an approach would greatly reduce the potential benefits of AVs.

Another way of constraining pedestrians, cyclists, and so on is to change the regulatory landscape, making jaywalking or interrupting AVs carry some significant consequence other than physical harm. This would require greatly increased surveillance, new laws, and successful enforcement. Such an approach might work, but it comes at a high social cost and may be impossible under regulations such as the GDPR in Europe. In a full-scale Orwellian scenario, even the vehicles themselves might record and report…

Lastly, a softer way forward might come from an evolved social etiquette governing the interaction between people and AVs. The introduction of mobile phones, for example, was socially disruptive at first, until cultures adjusted and figured out socially acceptable ways of using the new technology. Perhaps something similar can be expected for AVs. A worry here, however, is that social etiquette takes time to develop, whereas the transition to AVs might happen too quickly.

The discussion between Neil, Malcolm, and Chuck (the co-host) is worth watching both for the comedy and for the social and behavioural questions it raises about the implementation of AVs. As intimated earlier, one of the primary questions is essentially a cost-benefit question: how much cost and damage are we willing to accept for smarter vehicles? Racial bias, corner cases, and unforeseen consequences can raise the social and economic costs in unexpected ways. These questions can’t be answered simply, but one clear implication, it seems to us, is that figuring out how to coordinate learning from mistakes will be essential. How can manufacturers learn from each other’s mistakes in ways that lead to a safe and workable AV future?

Whenever we manage to design safe-enough AVs, their wide-scale adoption will affect mobility and infrastructure in many ways. However that unfolds, social and behavioural considerations must take centre stage.

Written by Håkan Burden & Joshua Bronson,
RISE Mobility & Systems