
To Err Is Human—And So Is Human-dependent Technology

Photo source: X/@jsuryareddy


Just over a week ago, on 24 November, three men knowingly obeyed their navigation app but unknowingly plummeted to their deaths. They were traveling at night from out of town towards a rural part of Uttar Pradesh, India, when Google Maps instructed them to drive onto a long bridge that eventually sent them flying more than 15m (about 5 stories) down onto the dry bed of the Ramganga River. The wreck was only discovered the next morning by villagers living nearby.


To understand how this misfortune occurred, we must first put several factors into context. The bridge had collapsed earlier that year due to floods and remained in disrepair. Locals were aware of its condition and knew to divert their routes around it. The three travelers, however, were in unfamiliar territory and needed Google Maps to guide them. Photos of the bridge show it was long and straight, an unconscious invitation to drive fast, though not necessarily to speed, especially under the impression that the road was fine. It certainly didn’t help that Google Maps' directions effectively legitimized the bridge (authorities claimed there were barricades but that locals had removed them). In rural India, streetlamps are few, if any, and they were virtually nonexistent on a damaged bridge awaiting repair.


We can thus visualize the scene leading up to the accident: the car confidently led along the unbending route over a dark void, the throw of the headlamps revealing a seemingly constant road emerging from a black wall, with nothing to stop them from flying off the bridge's abrupt mid-air end.


There’s much to unpack here: the response time needed to spot a cliff’s edge, the braking distance required at highway speeds, the driver’s visual field and the salience of an unexpected hazard in pitch darkness, or even the apparent lack of safety signs and barriers warning against using a once-functioning bridge left unrepaired for almost a year. But let’s focus instead on human-technology interaction and how this incident exposed the limitations of our automated tools and artificial intelligence, limitations that many of us conveniently overlook.
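Before moving on, it's worth feeling just how stacked the odds were against that driver. Here's a rough back-of-envelope sketch in Python; the speed, reaction time, friction coefficient, and headlamp reach are all assumed values for illustration, not figures from the incident:

```python
# Back-of-envelope stopping-distance estimate.
# Every input here is an assumption for illustration, not incident data.

G = 9.81  # gravitational acceleration in m/s^2

def stopping_distance(speed_kmh, reaction_s=1.5, friction=0.7):
    """Total distance (m) to stop: reaction distance + braking distance.

    Braking distance uses the standard kinematic estimate v^2 / (2 * mu * g).
    """
    v = speed_kmh / 3.6                    # km/h -> m/s
    reaction_dist = v * reaction_s         # ground covered before braking begins
    braking_dist = v ** 2 / (2 * friction * G)
    return reaction_dist + braking_dist

headlamp_reach = 60  # metres of road a typical low beam reveals (assumed)
for speed in (60, 80, 100):
    d = stopping_distance(speed)
    verdict = "can stop" if d <= headlamp_reach else "cannot stop in time"
    print(f"{speed} km/h: {d:5.1f} m needed -> {verdict} within {headlamp_reach} m")
```

Under these assumptions, even a driver doing a lawful 80 km/h needs roughly 69m to stop, more road than their low beams can reveal. An unlit, unmarked drop is effectively unavoidable.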


Don't get me wrong. As human factors professionals, we do love our automation. It sits high on the Hierarchy of Intervention Effectiveness. It does most of the dirty work we've taken for granted. Statistics from the U.S. National Safety Council showed that advanced driver assistance systems decreased crash and insurance claim rates. Novice surgeons performed better, with fewer errors, when using computer-assisted image-guided navigation systems for keyhole surgeries. Even venting your frustrations to an AI-powered chatbot can improve your mental well-being, as one Singaporean researcher discovered. The rise of GPS and navigation apps has diminished the cognitive effort once demanded by street directories and road atlases.


The Hierarchy of Intervention Effectiveness


Yet the gifts of innovation seldom come without burdens. As WALL-E forewarned, convenience often dulls capabilities; consider how swiftly we now turn to navigation apps, deskilling us in spatial awareness and wayfinding. Automation, despite its veneer of independence, remains fundamentally reliant on us helping it help us. Technology adheres strictly to human-derived rules, executing them with a consistency humans cannot match, more like a well-trained animal than an intelligent entity that consciously acquires and integrates knowledge. To cite a gentleman I'll never meet but whom my colleagues from Wisconsin adore, the "sit-stay fallacy" tricks us into mistaking technological wizardry for genuine intellect.


Google Maps relies on its users to provide live updates, complaints, notifications, and latent data to keep the app current and generate reliable directions. Thereafter, it needs human technicians to make sense of and verify this information before eventually communicating it in a way the software understands. It’s unlikely that locals or provincial officials in the rural area prioritized informing Google about the broken bridge in their neighborhood. This holds true across the globe, where populations exhibit varying degrees of tech-savviness. The app itself isn't capable of autonomously picking up the news about bridges being washed away by floods and putting two and two together. It thus suffers from a variation of GIGO (garbage in, garbage out), but given its strong track record, we continue using it with a false sense of security.
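To make this GIGO failure mode concrete, here's a toy sketch in Python. The road graph, place names, and distances are entirely invented; the point is simply that a routing algorithm can only be as good as the data it is fed:

```python
import heapq

# Toy road graph: node -> {neighbor: distance_km}.
# The bridge edge is still present because no verified report has removed it.
roads = {
    "town": {"bridge_west": 4.0},
    "bridge_west": {"bridge_east": 0.5, "detour": 6.0},  # bridge + local detour
    "bridge_east": {"village": 3.0},
    "detour": {"village": 5.0},
}

def shortest_route(graph, start, goal):
    """Plain Dijkstra: finds the cheapest path the *data* says exists."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, dist in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + dist, nxt, path + [nxt]))
    return float("inf"), []

# Without a report, the router happily sends traffic over the dead bridge.
print(shortest_route(roads, "town", "village"))
# -> (7.5, ['town', 'bridge_west', 'bridge_east', 'village'])

# Only after a human report is ingested and verified does the edge go away.
del roads["bridge_west"]["bridge_east"]
print(shortest_route(roads, "town", "village"))
# -> (15.0, ['town', 'bridge_west', 'detour', 'village'])
```

The algorithm is flawless both times; only the data changed. That is GIGO in miniature.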


Is it therefore a weakness of automation or of humans, when the tireless machine fundamentally depends on human vigilance for its success?


One of the best pieces of advice I still share was adapted from another gentleman I’ve actually met: adding automation is like introducing an alien team member, one who neither speaks the same language nor shares the same cultural assumptions. Like people, automated systems are imperfect, prone to mistakes that seem silly in the realm of humans but are logically sound to our alien friends. Autocorrect still ducks up if it hasn't figured out your typing quirks. It took years to teach Roombas to recognize and avoid smearing pet poop all over the floor. Medical devices expect you to be proficient in R2D2's native language of beeps and whirs to understand what they are desperately trying to say. And the navigation app on your phone won't recognize a broken bridge up ahead.


"Generate an image of Singaporeans in front of iconic Singapore landmarks". How many deformed limbs and disfigured faces can you spot? No prizes, sorry.


The factors surrounding human-automation interaction were among the many Swiss-cheese holes that aligned. Had the hazard been firmly cordoned off and regularly reviewed, the driver might have been compelled to question, or even disregard, the navigation app's instructions. If you spot a safety booby-trap, don't simply ignore it and walk away. Sound it out and ensure others notice it too. Place an obstacle over a spill to nudge people around it. Tape down trip hazards, sharp corners, and unsafe power sockets before raising them to the relevant departments. At the same time, stakeholders should not delay rectifying known booby-traps and expect others to simply be careful instead.


Serious trigger warning ahead. Consider stopping here.


This story resonated particularly because of another one that still haunts me today. Exactly five years ago, a mother traveling in Sabah, Malaysia, innocently filmed her toddler from behind as the girl walked across an incomplete, unbarricaded link bridge. At the very end of the bridge was an inconspicuous gap, the wall's black border probably creating a visual illusion of closure. CCTV footage, as well as the mother's own video, showed the girl disappearing into the hole and falling five stories to her death.


