Driverless Cars Stumble: Advertisements Confuse AI, Pedestrian Behaviour Poses Challenges
The rapid advancement of autonomous vehicle technology, while promising, continues to face significant hurdles, with recent incidents highlighting the AI’s struggle to distinguish real-world scenarios from artificial representations. As trials of self-driving taxis are set to commence in London, and have already been conducted in various British locales including York, concerns surrounding the safety and reliability of these computer-controlled vehicles remain paramount.
Professor John McDermid, a government advisor on self-driving vehicles and a software expert at the University of York, recently shared insights into the ongoing challenges faced by autonomous systems. Speaking at London’s Science Media Centre, Professor McDermid explained that even sophisticated AI systems can be easily misled, leading to potentially hazardous situations.
One particularly striking example involved a driverless car that mistook a life-size advertisement for the 2015 film The Man from U.N.C.L.E., prominently displayed on the side of a bus, for a group of actual pedestrians in the road.

The AI, interpreting the static images of actors as human beings, triggered an emergency stop. This sudden manoeuvre, though intended to avoid a perceived collision, could have posed a significant risk to vehicles following behind, exposing a critical flaw in the AI’s object recognition capabilities.
Professor McDermid elaborated on this incident, stating, “One of the automated vehicle companies I work with had a situation where their vehicle did a sudden emergency stop because it’s all pedestrians in the road, except they weren’t. It was a life-size advert on the side of a bus, but to an AI, it was human beings. That seems very obvious [to us], but actually, to the AI, it’s not.”

Beyond misinterpreting static advertisements, driverless cars in trials conducted in York have also been confounded by the unpredictable behaviour of pedestrians. Professor McDermid noted instances where vehicles were confused by individuals crossing the road even after the traffic light had turned green, and while the “no crossing” signal was still flashing.
In contrast, the United States, a pioneer in self-driving car development, often enforces strict penalties for “jaywalking.” This legal framework may encourage pedestrians to adhere more closely to traffic signals, potentially simplifying the driving environment for autonomous systems. However, in Britain, the pedestrian often holds a more assertive position, a nuance that current robot vehicles appear to struggle to fully comprehend.
Professor McDermid further illustrated this point with an observation from York: “It’s seen that there’s a traffic light, so identified the hazard, because the light is red. It changes to green, the vehicle is about to move off. But this is York, so the tourists – although the lights change to green – still walk across the road. Computer vision doesn’t understand what it doesn’t have models for in the world. It doesn’t know what a roundabout is.” This indicates a fundamental gap in the AI’s ability to understand contextual nuances and unwritten social norms that govern road usage.
Global Rollouts and Lingering Safety Doubts
Despite these challenges, the deployment of autonomous vehicles continues globally. American company Waymo is slated to begin its driverless taxi trials in London from Easter, and Uber plans to join the scheme, eventually offering robotaxis to the paying public.
However, past incidents in San Francisco involving Waymo vehicles raise further questions. Approximately two years ago, school crossing guards, often referred to as “lollipop ladies,” reported numerous near misses with these futuristic cars.
A survey of 30 such attendants revealed that nearly a quarter had experienced a “close call” with an autonomous vehicle, with some forced to run to avoid being hit.
Veteran crossing guard Theresa Dorn recounted three such near-misses within a single year. In one harrowing incident, a parent had to intervene to save a child: “The parent grabbed the child, looked at the car – and there was nobody driving it. Why do they have these driverless cars? I think somebody should be driving them.” Her sentiment reflects a broader public unease about the complete removal of human oversight from the driving process.
Public Apprehension and the Need for Higher Standards
In the United Kingdom, official government guidance stipulates that “self-driving vehicles should be held to the same high standard of behaviour as that expected of human drivers.” However, a significant portion of the public surveyed believes that the standards for autonomous vehicles should be even more stringent. This apprehension is understandable, particularly given that annual road fatalities in the UK hover around 1,600.
Professor McDermid has issued a stark warning, urging that pedestrians should not become a “moral crumple zone” for the burgeoning fleet of robocars. This highlights the ethical considerations at play and the need for robust safety protocols that prioritize human well-being above all else. As the technology matures, it is clear that addressing these fundamental safety and perception issues will be crucial for the widespread acceptance and successful integration of driverless cars into our daily lives.