Eyes: the windows to the soul

One of the most safety-critical and contentious issues in the automotive world right now is how to effectively monitor a driver’s engagement level while they use partially and fully automated driving functions.

French poet Guillaume de Salluste Du Bartas (1544-1590) was not mistaken when he described the eyes as, “these lovely lamps, these windows of the soul.”

In the context of driver monitoring system (DMS) technology, we believe he was also arguably ahead of his time.

Increasing levels of driver assistance are already providing drivers with the opportunity to do other things in their car while it is operating on public roads. The initial versions of “what’s to come” are already on our roads today, in what are known as Level 2 (partial or semi-automated) driver assistance features that can control the car’s speed and keep it within its lane, but only in certain circumstances. As reviewed by Consumer Reports, examples include Tesla’s Autopilot and General Motors’ Super Cruise system, now available in the 2018 Cadillac CT6 and 2021 Escalade.

A common feature is that drivers with these systems are able to take their hands off the steering wheel and feet off the pedals under certain driving conditions, for example on geofenced highways. However, automated vehicle (AV) technology is currently incapable of handling many of the situations that are encountered in daily driving. Yet, as humans, we adapt quickly: we can easily come to trust such systems and become complacent or unknowingly inattentive. These issues can lead, and have led, to tragic consequences, such as the deaths in 2018 of Elaine Herzberg during Uber’s AV testing, and of Walter Huang while he operated a Tesla. And this week the issue has been highlighted again, as another tragic accident involving a Tesla in Texas unfolds.

By now, there is little argument over whether it is necessary to monitor driver state while these Level 2 systems are in use. Standards such as SAE’s J3016 clearly state that at Level 2 the human remains responsible for detecting objects and events. Groups including the National Transportation Safety Board (NTSB) in the US are advocating for DMS, Europe has already moved to mandate the technology from 2022, and Euro NCAP (the European New Car Assessment Programme) is actively working on its 2025 roadmap to award safety points to vehicles equipped with camera-based DMS technology. There are good reasons for this.

Not all DMS are created equal.

Driver assistance features are relatively new, and it is not yet widely understood how they operate and interact with a driver. In fact, there is wide variability in how different vehicle manufacturers (OEMs) implement them.

Camera-based DMS technology faces the driver and monitors their head and/or eye movements to detect where the driver is looking. These are sometimes called ‘direct’ DMS, or ‘awareness’ DMS. ‘Indirect’ DMS technology (also called a ‘control’ DMS) includes systems that detect vehicle behaviour (e.g. lane departures and steering variability) and those that detect the presence of hands on the wheel. Systems that detect vehicle behaviour are somewhat redundant in L2-capable vehicles (given the vehicle controls tasks like lane keeping), but hands-on-wheel sensors are still widely used.

A common form of hands-on-wheel DMS uses a torque sensor that detects force applied to the steering wheel. Another form, the capacitive hands-on-wheel sensor, relies instead on ‘touch’: it detects contact with the rim itself.
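
To make the distinction concrete, here is a minimal, hypothetical sketch of the two sensing approaches. The thresholds, signal names and units are illustrative assumptions, not any manufacturer’s actual implementation: a torque sensor only registers the driver when steering input exceeds a force threshold, while a capacitive sensor registers mere contact with the rim.

```python
# Hypothetical comparison of the two hands-on-wheel sensing approaches.
# Thresholds, signal names and units are illustrative assumptions only.

TORQUE_THRESHOLD_NM = 0.3    # assumed minimum steering torque counted as "input"
TOUCH_THRESHOLD = 0.5        # assumed normalised rim-contact level counted as "touch"

def torque_sensor_detects_hands(steering_torque_nm: float) -> bool:
    """A torque sensor infers hands only from force applied to the wheel."""
    return abs(steering_torque_nm) >= TORQUE_THRESHOLD_NM

def capacitive_sensor_detects_hands(rim_contact_level: float) -> bool:
    """A capacitive sensor infers hands from contact with the rim itself."""
    return rim_contact_level >= TOUCH_THRESHOLD

# A relaxed driver resting both hands lightly on the wheel applies almost no torque:
print(torque_sensor_detects_hands(0.05))     # False -> torque sensor sees "no hands"
print(capacitive_sensor_detects_hands(0.9))  # True  -> touch alone satisfies it
```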

Hands-on-wheel sensors might initially sound effective enough, considering that to perform an evasive manoeuvre or take back control the driver will most often need to use the steering wheel. But as Consumer Reports noted in their 2020 ranking of automated driving assistance systems, having hands on the wheel “does not necessarily mean the driver is actually looking at the road ahead.”

Driving has always been a highly visual task, and this is being reinforced as vehicles come equipped with more assistance features that do a large chunk of the work on behalf of the driver but still require driver supervision. To take action in a situation that requires the driver to step in, the driver must notice, or be notified of, the situation in the first place.

Aside from failing to measure the single most important indicator of driver engagement (whether the driver is cognitively engaged in the driving task), torque sensors are also extremely simplistic and easy to spoof. Some users achieve uninterrupted hands-free driving by jamming objects into the steering wheel to act as a counterweight. This fools the system and negates what little safety benefit it may otherwise have provided. It has even inspired people to record videos of themselves pretending to be asleep in the back of the car.

As if these safety shortcomings were not bad enough, the user experience could be the icing on the cake (or wheel). In the case of torque sensors, drivers will often have one or both hands on the wheel, but still get alerted by the system because the sensor hasn’t detected any input.

This frequent ‘nagging’ – as it is called by many owners – typically happens with highway driving or on long straight roads. We’ve seen it many times during our own research with members of the public. It annoys some users, but most quickly work out that ‘jiggling’ the wheel at semi-regular intervals is enough to keep the alerts at bay. They are then free to put their hands wherever they see fit in between. This also highlights that torque sensors cannot assess engagement in a continuous fashion. Instead, they essentially take a sample of driver ‘engagement’ at set intervals.
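
To see why a periodic ‘jiggle’ is enough, consider a minimal sketch of how such an interval-based check might work. The 30-second window and the torque threshold below are assumptions for illustration, not any specific vehicle’s logic.

```python
import time

# Hypothetical interval-based torque check. The 30-second window and the
# torque threshold are illustrative assumptions, not any vehicle's logic.

NAG_INTERVAL_S = 30.0
TORQUE_THRESHOLD_NM = 0.3

class TorqueNagMonitor:
    def __init__(self) -> None:
        self.last_input_time = time.monotonic()

    def update(self, steering_torque_nm: float) -> str:
        """Called periodically with the latest steering torque reading."""
        now = time.monotonic()
        if abs(steering_torque_nm) >= TORQUE_THRESHOLD_NM:
            # A single brief "jiggle" resets the whole window.
            self.last_input_time = now
        if now - self.last_input_time > NAG_INTERVAL_S:
            return "ALERT: apply steering input"
        return "OK"
```

Nothing in this loop knows where the driver is looking; it only samples whether force appeared on the wheel within the last window, which is exactly the gap the ‘jiggle’ exploits.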

Detecting an object or an event requires the driver to be monitoring the roadway. A driver with their hands (or hand) on the wheel but their eyes off the road is a driver who is woefully unprepared to take control. In sport, this is like an outfielder waiting to catch a ball with their hands outstretched in front and their eyes looking down at their shoelaces.

To support driver safety, a DMS must be able to detect the driver’s visual attention state. It should know when you are looking at the road, when you are not, and the last time you checked it. A camera-based DMS enables visual attention (the best currently available predictor of driver engagement) to be measured, and it can measure it continuously, without requiring the driver to perform any unnatural movements unrelated to the driving task.
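
By contrast with the interval-based torque check sketched above, a camera-based approach can update its picture of attention on every camera frame. The sketch below is purely illustrative: the gaze-region labels, the two-second off-road budget and the 30 Hz frame rate are assumptions, not a description of any production system.

```python
# Hypothetical continuous gaze-based attention check. The gaze region labels,
# the 2-second off-road budget and the 30 Hz frame rate are illustrative
# assumptions, not a description of any production DMS.

FRAME_PERIOD_S = 1.0 / 30.0
MAX_EYES_OFF_ROAD_S = 2.0

ON_ROAD_REGIONS = {"road_ahead", "left_mirror", "right_mirror", "rear_mirror"}

class GazeAttentionMonitor:
    def __init__(self) -> None:
        self.eyes_off_road_s = 0.0

    def update(self, gaze_region: str) -> str:
        """Called once per camera frame with the classified gaze region."""
        if gaze_region in ON_ROAD_REGIONS:
            self.eyes_off_road_s = 0.0       # attention confirmed, reset the budget
        else:
            self.eyes_off_road_s += FRAME_PERIOD_S
        if self.eyes_off_road_s > MAX_EYES_OFF_ROAD_S:
            return "ALERT: eyes off road"
        return "OK"
```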

Having hands on the wheel will, in most cases, be a requirement for taking over during an unexpected event, but this step ranks a distant second to ensuring the driver’s eyes are actually on the road.

The eyes are the window to the soul – it’s never been the hands!