Wednesday, December 3, 2014

Police Decision Making: Science, Policy and Practice for the Use of Deadly Force

Once the proverbial "pull the trigger" decision is made, whether to let loose a verbal volley or fire a single shot, it is almost impossible to apply the brakes. The outcome could be a frayed relationship with a colleague at work, or a loss of life on the streets during police work. 

The latter was seen recently in the police shooting in Ferguson, Missouri, where Officer Darren Wilson fatally shot 18-year-old Michael Brown. In addition to the precious loss of a young man's life, there were repercussions ranging from riots to citizens' loss of confidence in the police itself. 


DECISION MAKING IN LIFE THREATENING SITUATIONS

Research in evolutionary psychology and cognitive science shows the underlying reasons why we humans act in a preemptive manner (use force), particularly when life and limb are at stake, even before all the facts are ascertained. They are:
  1. Time pressure
  2. Physical survival under threat, or loss of property 
  3. (1 and 2 causing) Danger-induced emotional arousal and biased decision making that favors self-preservation. 
The simplest way to describe the above is by analyzing the structure of the human brain. Our brain carries the baggage of our evolutionary history, from the time we evolved from reptiles to small mammals and eventually the primates that we are today (Homo sapiens: Latin for "wise man" or "thinking man"!). This is revealed in the structure of our triune (three-layered) brain, where the reptilian brain is at the lowest level, followed by the intermediate brain at the next higher level, and the rational brain at the highest level.  



Our base instincts pertaining to self-preservation and aggression (including quenching hunger, sexual drive, bowel and bladder functions) are largely governed by the primitive or reptilian brain, whereas mental processes that concern higher-order thinking and symbolic manipulation, say, composing music or reading a map, operate in the rational brain.

So, in other words, we Homo sapiens, the supposed "wise man," are not really WISE when it comes to decision making while survival or self-preservation is at stake. 

Furthermore, when it is a matter of survival, we would rather assume, in the spur of the moment, that the perceived threat is real (a positive), even if it turns out to be false upon examination or later reflection. 

Why? It is better to be wrong than to be sorry (after the fact, say, through injury or death). 

Evolutionary psychologists call it the "Snake in the Grass Effect." For example, if we were walking in the woods and felt something rubbing against our shin, our non-conscious, reptilian brain would make us jump back even before we got a chance to determine the source of that feeling. Later examination might reveal that we had merely rubbed our shin against the bark of a tree, giving us that "scaly feeling"! Thus, the "Snake in the Grass Effect."

If in reality that "scaly feeling" turned out to be tree bark that caused us to jump back in alarm, then it was a false positive; however, regardless of the error, we have not lost a thing. Perhaps our heart rate and stress hormone levels momentarily elevated due to the hardwired fight-or-flight response. On the other hand, what if that "scaly feeling" really happened to be a snake? It is quite possible that on a rare occasion it might well have turned out to be a real rattlesnake with scales (a true positive). Jumping back in alarm may actually have helped us survive!
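The asymmetry between the two errors can be sketched with a toy expected-cost calculation; the cost numbers below are invented purely for illustration:

```python
# Illustrative error-management sketch: when the cost of a missed threat
# (false negative) dwarfs the cost of a needless startle (false positive),
# the expected-cost-minimizing policy is to jump first and check later.
# All numbers are made up for illustration.

def expected_cost(p_snake: float, jump: bool) -> float:
    """Expected cost of jumping (or not), given the probability that
    the 'scaly feeling' really is a snake."""
    COST_STARTLE = 1.0       # momentary stress from a needless jump
    COST_SNAKEBITE = 1000.0  # injury or death from ignoring a real snake
    if jump:
        # Jumping always incurs the small startle cost, snake or no snake.
        return COST_STARTLE
    # Not jumping risks the full snakebite cost with probability p_snake.
    return p_snake * COST_SNAKEBITE

def should_jump(p_snake: float) -> bool:
    return expected_cost(p_snake, jump=True) < expected_cost(p_snake, jump=False)

# Even a 1-in-100 chance of a real snake makes jumping the cheaper bet:
print(should_jump(0.01))    # True  (1.0 < 10.0)
print(should_jump(0.0005))  # False (1.0 > 0.5)
```

This is why the reptilian brain's bias toward false positives is, on average, a winning strategy for survival.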



Snake in the Grass Effect


SURVIVAL: DECISION MAKING ON THE POLICING BEAT

How does all this play into policing and decision making?

Police officers are human, too, and succumb to the same decision making processes described above, governed by the reptilian brain and false positives (the snake in the grass effect). Furthermore, their decision making may be affected by implicit biases when a suspected person belongs to another racial or ethnic category. Alas, that is how the brain is wired, given its evolutionary history.

BUT this is no excuse for police officers to open fire on innocent citizens. To prevent this, police departments have policies such as the Use of Force Continuum (pictured below), which governs when the use of force is appropriate and how it may be escalated. (A recent addition is body-worn cameras, intended to deter officers from unwarranted use of force.)



The classic definition of the philosophy of policing, which drives much of training and policing practice in the US, comes from the scholar Egon Bittner's (1985) classic paper*. He observed:
"The police are best understood as a mechanism for distributing nonnegotiable coercive force in accordance with an intuitive grasp of situational threats to social order. This definition of the police role presents a difficult moral problem; setting the terms by which a society dedicated to peace can institutionalize the exercise of force...."
But how does a police officer, in high-stakes situations, get an intuitive grasp of situational threats? And how does one prevent false positives, particularly when transitioning from Level Four use of force to Level Five? And, in practical terms, under stressful situations, when danger-induced emotional arousal (reptilian brain) drives much of cognition, is it even possible to recall the Use of Force Continuum? 
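To make the continuum concrete, here is a minimal sketch of a five-level continuum as an ordered scale with a one-step escalation rule. The level names follow one common textbook formulation and are assumptions here, since departments codify their continua differently:

```python
# A minimal sketch of a five-level Use of Force Continuum as an ordered
# scale with a one-step escalation rule. The level names follow one common
# textbook formulation; actual department policies vary.

from enum import IntEnum

class ForceLevel(IntEnum):
    OFFICER_PRESENCE = 1
    VERBALIZATION = 2
    EMPTY_HAND_CONTROL = 3
    LESS_LETHAL = 4      # e.g., baton, chemical spray, Taser
    LETHAL_FORCE = 5     # the Level Four -> Five transition is the critical one

def escalate(current: ForceLevel) -> ForceLevel:
    """Escalate one level at a time; never skip straight to lethal force."""
    if current is ForceLevel.LETHAL_FORCE:
        return current
    return ForceLevel(current + 1)

# The Level Four -> Level Five step discussed above:
print(escalate(ForceLevel.LESS_LETHAL).name)  # LETHAL_FORCE
```

The sketch makes the policy's intent visible: force is an ordered scale, and the question raised above is whether an aroused reptilian brain can actually walk it one step at a time.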

These questions need to be asked and researched and solutions developed by taking a multi-pronged approach in the following areas:
  • Selection and recruitment procedures for police officers (taking into consideration individual profiles of psychological and personality attributes, and appropriate screening to determine whether a candidate has innate or maladaptive cognitive and physical abilities for policing).
  • Police training curriculum and methods (techniques and simulations to impart knowledge, skills, abilities to tamp down hardwired responses such as the "Snake in the Grass Effect").
  • Policies, procedures and protocols (on use of force; buddy-system; back-ups).
  • Technologies that monitor and/or augment officers' contextual-intelligence (person & place) and real-time situational awareness.
Before I conclude this article, I want us to consider a hypothetical question, which is both daring and crazy at once, a heresy even to utter in the context of policing in the United States:
Would having unarmed police officers conduct community policing reduce the TOTAL number of unwarranted killings -- loss of lives -- of both Citizens and Police Officers?
I am not sure what the answer would be, because any loss of innocent life, be it that of an officer or a citizen, is unacceptable. 


But by asking the above question, I raise a plausible solution (pointers, really) in terms of officer recruitment, training, police comms. & computing technology, and policy. From a human factors standpoint, an unarmed police officer would need to have built up extraordinary abilities to defuse a situation without the use of force. In other words, our hypothetical unarmed police officer needs the following:
  • high level of skills in communications (persuasion/dissuasion, body & verbal language); 
  • expertise in naturalistic decision making (the ability to quickly discern the type of situation, then engage or disengage from person and incident, particularly in a one-on-one situation where there is uncertainty about the level of threat and the suspect's desire to inflict bodily harm on the police officer);
  • pre-engagement decision making augmented with technology (sensors, warnings, pre-engagement alerts) that enhances contextual intelligence and situational awareness and enables the right go/no-go decision;
  • socio-psychological abilities (command presence, language, tone of voice, community engagement), physical fitness, and expertise in martial arts. 
All of the above, in my opinion, can contribute to the officer maintaining locus of control and confidence. (Oftentimes, it is a loss of confidence, or fear, that leads to pulling the trigger.)

The take-away message is that policing requires men and women with extraordinary capabilities and skillsets in multiple dimensions. They need not only physical strength, but also wit and wisdom on the fly. In other words, they need to be real HOMO SAPIENS, a.k.a. the "wise man" that we are capable of being when our rational brain is operational. What can and should be done by policy makers, researchers, recruiters, trainers, commanders, and actual policing practice, so that we have "wise men and women" police officers on the ground? And, more importantly, can it be realized in policing culture and practice quickly enough to prevent the next Ferguson? 

About the author:
Moin Rahman is a Principal Scientist at HVHF Sciences, LLC. He specializes in:

"Designing systems and solutions for human interactions when stakes are high, moments are fleeting and actions are critical."

E-mail: moin.rahman@hvhfsciences.com

Wednesday, August 27, 2014

FirstNet: Reaping the Benefits of the "Broadband" by Aligning it with Social and Human Factors

The First Responder Network Authority, a.k.a., #FirstNet is certainly a "First" in more than one way. In addition to serving as a Public Safety Broadband Network (PSBN) for First Responders, it is also a "First" in terms of its potential for bridging the gulf between public safety socio-technical systems (organizations) and citizens at-large through a variety of Social Media and communication platforms (operating on 4G LTE networks with smart phones, tablets, wearable devices, and the like serving as end-user devices).
The traditional model for communication between citizens and first responder organizations, currently constrained by legacy systems, needs to be taken to the next level, into a new era made possible by FirstNet: communication and computing "utilities" and widgets that are citizen-centered, public safety-focused, and community-service-oriented. For example, imagine the following possibilities: 
  1. Consider the utilization of distributed computing, smart analytics, and intelligent sensor assets (citizens included, citizens as sensors!) endowed with the capability to "pull" the right first responder assets to the site of an incident even before a formal voice call is made to a 911 PSAP (public safety answering point)
  2. Enable the first responders to develop a veridical mental model of the different aspects of the situation (e.g., chemical spill; bodily harm; natural disaster) so that they properly equip themselves and approach the situation
  3. Facilitate the development of situation awareness (perception, comprehension, and projection of the unfolding event, its non-equilibrium dynamics, and potential for harm) so that the first responder team and commander can employ "naturalistic decision making" to develop strategies on the fly as to how to respond and contain it; etc.
Last but not least, Social Media-like platforms can also be employed by exploiting the "broadband" of the PSBN within and among public safety agencies, with the goal of enabling human and organizational interoperability and overcoming the fragmentation and compartmentalization of data, records, and institutional memory (human expertise), so that they can be either tapped into -- or intelligently "pushed" to -- public safety personnel at any given time. The beneficiaries may range from a first responder at the tactical edge, or sharp end of the system, to command at the back end of the system.


New developments such as cloud computing and data transmission at broadband speeds alone will not suffice if they are not aligned with the social and human factors of citizens and first responders. The imperative here is to identify "data" that is relevant to the emergency situation (i.e., non-normal and abnormal situations), thus turning it into actionable "information," and then to process and present that information in a form that can be comprehended -- i.e., turned into "knowledge" -- by all cognitive agents in the system (humans/first responders to AI/computing systems). This may involve anything from developing novel transcoding techniques to developing human-machine learning systems that complement each other and leverage their respective cognitive computing strengths (perceptual vs. conceptual gist), thus acting as a force multiplier at the tactical edge and for society at large.
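A toy sketch of that Data-to-Information-to-Knowledge chain might look as follows; the sensor names, record fields, and threshold are invented purely for illustration:

```python
# A hedged sketch of the Data -> Information -> Knowledge chain described
# above: raw readings are filtered for task relevance ("information"), then
# rendered in a form a responder can act on ("knowledge"). The field names
# and thresholds are invented for illustration.

def to_information(readings, relevant_sensor: str, threshold: float):
    """Filter raw data down to task-relevant, abnormal readings."""
    return [r for r in readings
            if r["sensor"] == relevant_sensor and r["value"] >= threshold]

def to_knowledge(information):
    """Present the information as a comprehensible, actionable chunk."""
    if not information:
        return "No abnormal readings."
    worst = max(information, key=lambda r: r["value"])
    return f"ALERT: {worst['sensor']} at {worst['value']} in {worst['zone']}"

readings = [
    {"sensor": "CO_ppm", "value": 12.0, "zone": "lobby"},
    {"sensor": "CO_ppm", "value": 410.0, "zone": "basement"},
    {"sensor": "temp_C", "value": 21.0, "zone": "lobby"},
]
info = to_information(readings, "CO_ppm", threshold=100.0)
print(to_knowledge(info))  # ALERT: CO_ppm at 410.0 in basement
```

The point of the sketch is the shape of the pipeline, not the specifics: relevance filtering turns data into information, and comprehension-oriented presentation turns information into knowledge.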
To summarize, how the "broadband's" extraordinary potential is harnessed to deliver utility to human and machine assets is contingent on understanding the interactions and coupling between them. This understanding requires performing cognitive ethnography in the field -- city streets to the fireground -- and applying both classical human factors and high velocity human factors (HVHF) to design intuitive user and cognitive interfaces between first responders and their radios, computing, and data devices. Then the power and speed of state-of-the-art computing and communication technologies (wearables to cockpit interfaces to cloud computing to the Internet of Things) can be delivered to first responders' proverbial fingertips and minds in cognitively digestible "chunks, volumes and velocities" -- in other words, in a highly intuitive format -- even when first responders don't have enough (cognitive) bandwidth of their own to interact with technology due to high workload, stress triggered by a high-stakes situation or imminent danger, situational impairment (smoke, water, debris in the environment), or personal injury.


In closing, the First Responder Network Authority (FirstNet) is one of the greatest challenges yet to the critical communications industry to up its game, in which the rate of innovation has been fairly flat compared to the goings-on in the world of consumer technology. FirstNet behooves the mission-critical industry to develop new paradigms and breakthrough innovations, in partnership with first responders, to both predict and react when it comes to protecting the life and limb of citizens in particular and the precious intellectual, cultural, and material wealth of free nations at large.

-----------------------
The author, Moin Rahman, is a Principal Scientist at HVHF Sciences, LLC
For more information, please visit:
http://hvhfsciences.com/
http://www.linkedin.com/in/moinrahman

HVHF Article Archive: http://hvhfsciences.blogspot.com/
E-mail: hvhf33322@gmail.com

Monday, June 9, 2014

Wednesday, June 4, 2014

Critical Communications for the Cognitive Age

"Critical Communications for the Cognitive Age"
by Moin Rahman in 
LTE Today, May 2014 (pp. 28-33)

In my article, I discuss how a socio-technical systems (STS) approach is required when designing critical communications for LTE, in order to realize its full potential as well as create an ecosystem for reliability, anti-fragility, and human interoperability insofar as critical communications and computing are concerned. Taking such a systems approach -- from the back end, through the backbone of an LTE network, to the sharp end of the system -- can deliver unrivalled benefits to professionals and first responders performing at the tactical edge, and to the human agents in the system at large, both during normal and, more importantly, abnormal situations.
Article (pages 28-33 only) for download: Drop Box (or) Google Docs



Tuesday, March 18, 2014

Touch Screen User-Interfaces: Touching to KNOW vs. Touching to say NO

Touché to TOUCH?!

We have evolved the sense of Touch to Know, to glean information about an object.

In this context, a physical object is its own user-interface. It doesn't require a capacitive or inductive touch screen to probe it and get a pixelated answer on a screen!

Why?

By touching an object, we learn about its status and obtain feedback: whether it is hot/cold, rough/smooth, dangerous/safe, clean/dirty, ripe/unripe, etc.

Sometimes we touch objects with a purpose.
  • to brush away dirt
  • to make indents
  • to scratch or scour off some wanted or unwanted material

Touch as a Mode of Interaction

Physical manipulation -- pushing a button, flicking a toggle, pulling a T-handle, turning a knob/wheel, etc. -- was the norm for user interaction in the industrial age. One literally had to overcome the force of the mechanism while interacting with it (which, by the way, also provided valuable haptic and kinesthetic feedback, but was fatiguing from a muscular-effort standpoint). Thus these were referred to as Machine Cowboy interfaces.

Next came the Analog Professional era, where the physical effort was made easy by hydraulics, solenoids, and actuators (e.g., power steering), and user-interface technology and interaction grammar evolved over time. Now we are in the touch-input epoch, which has been extended to things such as fly-by-wire and drive-by-wire, where an input, say, on a touch screen or joystick, is converted into a digital signal, which, in turn, changes the speed of the HVAC fan in a car or the position of the flaps on the wing of a plane. 

But when and where did touch interaction first appear? You would be surprised to learn that the earliest touch interaction was more on the physical continuum and didn't involve an LCD screen, because the degree of pressure exerted on the interface was itself the input!

The world's first touch interface was the Electronic Sackbut. (Follow this link for an illustrated history of Touch Input Technologies).

1948: The Electronic Sackbut. The right hand controls the volume by applying more or less pressure on the keys; the left hand controls four different sound-texture options via the control board placed over the keyboard. (Courtesy: NPR)

Now let us compare the Electronic Sackbut's user-interface with a ubiquitous piece of technology of our time, the iPhone.

iPhone's Touch Interface
The iPhone, with its multi-touch user-interface (e.g., pinch, rotate, swipe, etc.), is a marvel. But there is one big difference between the Electronic Sackbut and the iPhone. The gateways to touch interaction on the iPhone, the "icons," are filled with semiotic information: symbols, signs, text.

Thus one needs to perceive and interpret the semiotic information, visually and cognitively, before deciding to do something with it. The iPhone certainly is not a problem when visual and cognitive attention are not fragmented; but they are fragmented when one is multitasking (e.g., driving and using the phone). And there are many other tasks besides driving that involve multitasking. For example, consider a public safety professional such as a police officer who needs to be vigilant about his environment; that is, he must not be visually tunneled, with his eyes riveted on the screen of his radio communication device, compromising his own safety in the process.

The challenges faced in user interaction during multitasking apply not only to a touch screen, but also to a UI bedecked with an array of physical push-buttons that have similar characteristics.

Another noteworthy point is that the touch and feel of the icons on the iPhone are one and the same. They don't distinguish themselves from each other on the tactile/haptic/pressure dimension. They all feel the same, even with a haptic vibe, and thus provide the same affordances.
An affordance is a property of an object, or an environment, which allows an individual to perform an action. For example, a knob affords twisting, and perhaps pushing, while a cord affords pulling. (via Wiki)

Varieties of Physical Affordances
Some affordances may be contextually goal-driven: e.g., using a hammer as a paperweight.

The concept of affordance has also been extended to encompass virtual objects, although some experts tend to disagree with this extension, as it lacks physical feedback. E.g., touching a touch-sensitive icon "affords" an action: a feature or app is opened. Or, in a mouse point-and-click paradigm, icons, radio buttons, and tabs are affordances (figure below).

Virtual Affordances on a Graphical User-Interface (GUI)

Touching to KNOW vs. Touching to say NO

I began this article by explaining the importance of touching to "know," a naturally evolved human ability that makes interacting with objects in the world intuitive (second nature). Now, contrast this with touching to say No (Figure below).

"Touching to say 'No'": this is a touch interaction that is contingent on correctly comprehending and processing the semiotic information. It requires a higher level of visual attention and cognitive effort.

A pure semiotic interface with like-affordances is not just limited to touch screens; it may also include an array of buttons (same "push" affordance). Although physically pushing a button in an array of similar buttons has a tactile/kinesthetic dimension to it, one still needs to cognitively process the icon or label on the button. So in some ways, such buttons are similar to icons arranged in an array on a touch screen, all with the same physical affordances. This indeed can pose a problem in multitasking environments, such as driving, where one may have to visually locate the buttons, perceive the semiotic information, select the appropriate one, and then push it.

The array of similar push buttons with the same affordances (except for 3 knobs) on this multi-band mobile radio used inside a police car provides a physical dimension to the interaction, which is good. But from a semiotic point of view, the buttons are similar to a touch screen and may impose similar visual and cognitive workloads in a driving/multitasking context. (Image via Motorola Solutions)

Automotive Industry Goings-on with Touch Input

A recent headline was an eye-grabber for designers in the automotive and technology worlds:


The Center Stack of a Ford vehicle. Regardless of whether the control is virtual or a physical button, there is a heavy reliance on semiotic information, including very similar affordances ("push"). (Image and full article at: Extreme Tech)
Ford's move toward replacing virtual touch buttons with physical buttons may yield some performance improvements, but likely not significant ones, for the reasons discussed above: similar affordances and semiotic dependence.

Besides a heavy reliance on semiotic information, there has also been a push toward reliance on inferential reasoning and separated control-to-display relationships on different planes and surfaces, which result in additional cognitive load. The Cadillac CUE Infotainment System (video) is one such example; it illustrates the amount of learning and inferential reasoning required to interact with it.

Cadillac CUE Infotainment System


But there is some good news. There have been some novel ideas about touch screen design for cars. See video below.

Novel Ideas for Touch Screen Interface in Cars (Detailed article in Wired)

The Future: Mixed Modal Interactions

Our naturally evolved ways of interacting with other humans, animals, objects, and artefacts in the world involve touch, speech, gestures, and bodily-vocal demonstrations (including facial expressions), among other things. Could a human-machine interface, particularly in a critical piece of technology (medical, critical comms., aviation, automotive, command & control rooms, etc.), be built to be compatible with what's natural to us?

Speech interfaces have gained both credibility and popularity (thank you Siri) and gestural interfaces are moving on from gaming apps to other utilitarian technologies such as cars. See figure below.

Drawings from a 2013 Microsoft patent application suggest gestures that would serve, from left, as commands to lower and raise the audio volume and a request for more information. (Credit: United States Patent and Trademark Office, via New York Times)

As we march into the future, be it in a car, a robot, or a treadmill, a semiotically laden, like-affordances-heavy, buttons-galore or touch-only UI, filled with metaphors and inferential reasoning, may not be a good idea. Consider these two examples as my closing statement as to why:

How many of us can recount the experience of inadvertently changing the speed instead of the incline when running on a treadmill? In most treadmills, both of these controls have like-affordances (push buttons in ascending order, or up/down arrows) and/or are mirror-imaged on either side of the display. But how many of us, when running at 7 mph, can distinguish the semiotics (text/symbol) on these buttons?
Or consider the case of powering off a Toyota Prius instead of putting it in Park (both the Power and Park controls are push buttons with like-affordances!).
Toyota Prius: Power and Park have the same affordance ("push"). When one is not paying sufficient attention, one is prone to commit an error of commission: pushing one for the other.

Going forward, we may need a mixed-modal UI that presents multiple ways of interacting with technology, to accommodate what comes most naturally to the user based on his/her situation, context, and current workload. This is also contingent on the levels of automation and intelligence that might be incorporated in a machine, device, or appliance.
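As a toy illustration only, a mixed-modal UI might select a modality from context and workload along these lines; the contexts and the selection policy below are assumptions for illustration, not a validated design:

```python
# An illustrative mixed-modal selection rule: pick the interaction modality
# that best fits the user's current context and workload. The contexts and
# the policy are assumptions for illustration, not a validated design.

def choose_modality(driving: bool, hands_busy: bool, noisy: bool) -> str:
    if driving and noisy:
        return "gesture"   # eyes stay on the road; speech is unreliable in noise
    if driving or hands_busy:
        return "speech"    # keep eyes and hands on the primary task
    return "touch"         # full attention available: touch works fine

print(choose_modality(driving=True, hands_busy=True, noisy=False))   # speech
print(choose_modality(driving=False, hands_busy=False, noisy=False)) # touch
```

Even this crude rule captures the core idea: the interface, not the user, should absorb the cost of switching modalities as workload changes.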

In the meantime, let's keep in mind that, as good as touch screens get to be, their qualities should not be viewed as the "Midas Touch" of user-interaction design.

In closing, every one of us must remember Bill Buxton's primary axiom for design in general and user-interfaces in particular:
"Everything is best for something and worst for something else."



Saturday, January 4, 2014

"SITUATION AWARENESS" - Say what?

...whose situation awareness are we talking about?: human, sensor, radio, computer or infrastructure?

"Situation Awareness" along with "intuitive design" have become buzz words in the Critical Communications industry. One finds these words a lot these days in marketing brochures, sales talk and presentations at technology tradeshows. Claims are made that one needs to buy Product X or Technology Y because it enhances the situation awareness of either a firefighter at the tactical edge or an utility control room operator in the backend of a system.

SITUATION AWARENESS - "Say what?"

The question is: if someone is using these terms -- "situation awareness," or "intuitive" and "user-friendly" user-interfaces -- for marketing purposes, do they provide any human factors-based measures to back it up? Hard, empirical data that quantifies the supposedly enhanced situation awareness of a mission critical professional who might be on the fireground, or back in the control room of a nuclear power plant?

"nah!" 

Rarely does one hear the details about situation awareness, or SA:

  • the process by which it is acquired.
  • the nature of SA as a product
    • i.e., perception of task-relevant data and its comprehension, toward enhancing the operator's SA of the system state (cohorts, teams, commander's intent, condition of the machine agent(s) or system); that is, what they -- the co-workers, teams, systems -- are doing, and why they are doing what they are doing (not just information, but an understanding or knowledge of what's going on; Figure 1)
    • projection of future states (e.g., estimated time for backup to arrive; wind direction an hour from now, of relevance to a wildland firefighter; readiness of a trauma care center to receive casualties from an accident site in the next 30 min.)
  • the measures or quantification of SA
    • what did the operator become aware of that he was not aware of previously? did he acquire this SA with effort (by probing the system), or effortlessly, where a smart system alerted him to the impending danger?  
Figure 1: Task-relevant data/information, when comprehended, turns into knowledge and thus enhances operator SA


Varieties of SA

SA certainly is not acquired easily by humans, even about the immediate space or environment, due to phenomena such as inattentional blindness, attentional tunneling, cognitive distraction, and information overload.

Additionally, SA is not the sole dominion of individual humans. Members of a team can have SA about what's going on in the socio-technical system (Shared SA). Individuals or teams that are geographically distributed can have SA about different aspects of a system (Distributed SA). A machine or system can have SA about what other sub-systems or humans are doing (m2m, machine-to-machine communication), or about which radios have been registered on a critical comm. wireless network and their locations (Systems SA). And when human and machine collaborate to acquire SA, with a tacit acknowledgement that in certain respects the machine is better than the human and vice versa, it is Joint SA.

Acquisition of SA

To acquire SA of a situation, the following are required:

  • Sensor 
  • Transducer
  • Computer 


As mechanistic as the above may sound, it is not necessarily so. The above could very well be a human. In the case of a human, an eye or ear is a sensor. The nervous system is a transducer (it takes the raw signal -- light or sound -- and converts it into a coded neural signal); the computer is the brain, where the signal is decoded and interpreted. For example, a police officer on hearing a sound may react with: "Ah! What I heard was a gunshot. My partner must be in trouble!"
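That sensor-transducer-computer chain, using the police officer example, can be sketched in a few lines of code; the decibel threshold and the signal categories are invented purely for illustration:

```python
# A minimal sketch of the sensor -> transducer -> computer chain described
# above, using the police-officer example: a raw acoustic signal is picked
# up, transduced into a coded form, then interpreted by the "computer"
# (brain). The threshold and categories are invented for illustration.

def sensor(environment: dict) -> float:
    """The ear: pick up the raw sound pressure level (dB)."""
    return environment["sound_db"]

def transducer(raw_db: float) -> str:
    """The nervous system: convert the raw signal into a coded category."""
    return "impulse_noise" if raw_db >= 120 else "ambient_noise"

def computer(coded: str) -> str:
    """The brain: decode and interpret the coded signal."""
    if coded == "impulse_noise":
        return "Gunshot! My partner may be in trouble."
    return "Nothing unusual."

# End-to-end situation awareness acquisition:
environment = {"sound_db": 140.0}
print(computer(transducer(sensor(environment))))
```

The same three stages apply whether the agent is a human, a surveillance sensor, or a self-driving car; only the hardware differs.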

A self-driving car, or autonomous vehicle, is an example of a machine acquiring SA, where it may either choose to accelerate or brake at appropriate moments.

The only difference between a human and a machine -- both, by the way, are intelligent cognitive agents in their own right -- is that the former excels at pattern recognition and novel situations, whereas the latter's algorithmic approach never tires nor loses vigilance due to monotony or a hangover!

Three mini case studies: SA obtained and missed

Sandy Hook Elementary School Shooting

Figure 2: Children being evacuated from the Sandy Hook Elementary School by Connecticut Police
In the second deadliest mass shooting in American history, twenty students, ages 6 and 7, and six adults were killed at the Sandy Hook Elementary school on December 14, 2012.

Figure 3: A graphic depicting the site of the shooting. (CNN)
As soon as the shooting began, 911 began receiving calls. In this incident, teachers and the school custodian were the eyes and ears ("sensors, transducers and computers") who were instrumental in describing and narrating the gruesome goings-on in real time and real space. This information, transmitted via phone by humans ("cognitive computing" at the source, onsite), with emotive intonations and ambient sounds, was instrumental in building the SA of law enforcement.  

In this case, it is hard to imagine a machine agent that could have equalled or surpassed the cognitive computing performed by the human agents onsite with regard to facilitating law enforcement's acquisition of SA. However, Joint SA, where surveillance video from the classrooms accompanied the human narration of events, might have been superior. Note: a human is really good at reading another human's (active shooter's) intent.

Verdict: SA Obtained to the extent possible

Asiana Air Crash

A 16-year-old girl who survived the crash of Asiana Flight 214 in San Francisco was tragically killed by "multiple blunt injuries" when she was run over by a rescue vehicle. 

Figure 4: Asiana crash at SFO in July 2013
This tragic accident occurred because the fire engines, quickly spraying thousands of gallons of water and foam, seem to have obscured the driver's view of a human figure on the tarmac. He was unaware (he missed the first step of SA acquisition: sensing and perceiving) of the person in his vehicle's path.

Verdict: SA Disabled

Metro North Train Accident

The recent Metro-North train accident on the Hudson Line, which resulted in fatalities, involved a train travelling at almost three times the permitted speed (82 mph instead of less than 30 mph) into a turn.

Figure 5: Metro-North train from Poughkeepsie to Grand Central Terminal, NYC, derailed in the Bronx. (via NYT) 

Both the driver, who allegedly dozed off, and the train (emphasis added) itself were unaware that the train was overspeeding through the turn. The Driver Alerter, a warning device for keeping the driver awake in the event he becomes drowsy, was not inside the cab in which the driver was located; nor did the system (train) have Positive Train Control, a track-signalling method by which the train would have automatically reduced its speed as it approached the curve. In this case, due to a combination of reasons, Joint SA (driver + machine/system) was absent, which can be counted among the major causes of the accident.

Verdict: SA Unavailable

Say What to What Next?

SA is acquired by various means in different critical infrastructure domains (public safety, transportation, utilities, etc.). When a complex socio-technical system is designed, with a number of components ranging from human agents to machine agents (sensors, radios, telemetry, computers, infrastructure, etc.), it is vitally important to set minimum requirements of SA for both human and machine.

Technology vendors should meet the requirements dictated by safety, by human and system performance under both normal and abnormal situations, and by other mandates. The SA requirements ("needs analysis") have to be identified through cognitive ethnography or contextual inquiry in the pre-design phase; then SA specifications must be set (qualitative and quantitative); and finally verified through lab and field usability testing, including live-action prototype testing under various equilibrium and non-equilibrium system states (normal, to heavy workload, to high-stakes/high-stress situations).

If SITUATION AWARENESS is just used as a "term of art" during design, or as a marketing "buzzword" by a technology vendor, and if we place our trust in it without verification, then there is great cause for concern. Then our own lack of SA (!) as designers, evaluators, and end-users on the important issue of SA is to be blamed! 

In closing, when a critical infrastructure socio-technical system is designed, or when a technology vendor claims that their technology enhances situation awareness for the first responder or driver, it is incumbent on us to verify the following:
  • Varieties of SA required for system performance and/or delivered by technology
  • Process for acquiring SA 
  • SA as 'product' in terms of meeting specifications and fulfilling requirements
  • Measurement of SA
The author, Moin Rahman, is a Principal Scientist at HVHF Sciences, LLC. He specializes in:

"Designing systems and solutions for human interactions when stakes are high, moments are fleeting and actions are critical."

E-mail: hvhf33322@gmail.com