Thursday, July 12, 2012

Automation's Biggest Irony (after all these years): The Non-Surprise

Bainbridge (1987), in "Ironies of Automation," observed that automatic equipment seems to function best when the workload is light and the task routine; when assistance is needed most -- because the automation cannot handle a novel situation and the operator's workload spikes -- the automatic equipment is of least assistance. This is the 'irony' of automation.

This "irony" seems to have some relevance to the crash of Air France 447 as reported by IEEE Spectrum. In short, the pilot had no idea as to why the autopilot may have disengaged suddenly at cruising altitude -- a surprise (!) -- which resulted in inappropriate pilot inputs. (The pilots were unaware that all three air speed sensors (pitot tubes) were defective -- giving incorrect inputs to the flight computers due to the formation of ice crystals  -- and as the autopilot didn't have airspeeds to work with, it automatically disengaged.)

The biggest irony of automation, after all these years of human factors research and design, should really be viewed as a "non-surprise" for the following reasons:

  1. Automation is not fail-proof, and when it does fail the consequences can be dangerous: the human operator is suddenly put in charge of an [automation] failure, thrust into a situation where the stakes are high and time is short.
  2. A sudden automation failure in a highly complex system, whose inner workings are opaque to the operator, may lie beyond the cognitive means of a highly stressed (panicky) operator to troubleshoot and recover from in time.
The above (#2) happens when a pilot is suddenly made to shift roles from a passive monitor ["out-of-the-loop"] to an active operator ["in-the-loop"] and is forced to grapple with the situation, grasping what is going on by rapidly developing a veridical mental model of it. Furthermore, this ability can be impaired by danger- or stress-induced impoverishment of the operator's cognitive control (rational thinking), resulting in disorganization of thought and/or inappropriate responses. (The latter topic forms the intellectual underpinnings of "High Velocity Human Factors.")

Years of experience have shown that automation will invariably abdicate its responsibility when its performance envelope has been exceeded, bewildering the operator -- which should come as no surprise to designers. So I will refer to it as a Non-Surprise. Thus it behooves designers to provide "means" that are not mentally taxing (e.g., that do not require cognitive transformations and inferential reasoning) by which a highly stressed operator can comprehend and take control of a non-normal situation. But what are the "means" to this end? I will reserve that for another post.

Moin Rahman
Founder/Principal Scientist
High Velocity Human Factors "HVHF" Sciences
http://hvhfsciences.com/
HVHF on Facebook
http://www.linkedin.com/company/hvhf-sciences-llc



2 comments:

  1. I find this a fascinating area, as to some extent it cuts to the root of the role that machines can play in facilitating human activities, so please forgive a bit of a ramble...
    As a general point, I am not so surprised that automation works best when workload is light and the task routine, as it seems to me that we give too little emphasis to evaluating where and how automation will be effective in the first place. We seem to have a collective expectation that machines will ultimately be able to replace almost any and every human activity, when in reality machines and human beings differ in their strengths and weaknesses. Indeed, automation could be viewed as simply the delegation of human activities to an external system, to free up capacity in our higher thought processes, much as the cerebral hemispheres delegate to the cerebellum, except that 'artificial' automation can extend to new sensors and 'actors' as well.
    Automation is often faster, more accurate and more reliable (repeatable) than humans at making logical evaluations and inferences where detailed procedures and guidelines (automation protocols) have been provided (by human beings). That human beings are required to provide the automation protocols in the first place also has an important benefit: a single instance of automation can articulate the collective knowledge, know-how and design efforts of a wide base of subject matter experts. The addition of speed, accuracy and reliability to this makes for a potent combination. One should also keep in mind the immense humanitarian, ethical, economic, risk and engineering benefits of excluding human beings from some activities.
    However, I submit that humans remain more effective than machines at reacting to the unexpected in keeping with the common values and priorities of humankind (or indeed of social groups, including formalised command structures). These values and priorities can be complex and the ability to articulate them quickly and correctly may require substantial training, and even if it doesn't, it may lean on the psychosocial processes of 'growing up' over time within a particular social or cultural setting. It is very difficult to confer these abilities onto machines, not least because human beings have a variety of indirect ways of applying them, including the accumulation over time of intuition, and the use of affective states (emotions). These latter points also give humans an obvious advantage over machines in relating to other humans. However rigorous the preparation, design and assurance, the very nature of entropy means that events will always be able to take any of an undefinable number of unexpected turns. We try to anticipate and accommodate those eventualities we have experienced (by design), and even try to add in provision for those we can only imagine (usually by generalising from related experiences, but also to some extent by abstract thought), but we can never cover all possibilities. This makes human judgement, initiative and performance under pressure impossible to automate completely. Therefore, in many cases, we may have more success in teaming humans with machines, than in replacing humans with machines; i.e. partial automation rather than total automation.

  2. This brings a further consideration into play: a type of Human Machine Interface where the machine is 'responsible' for maintaining Situational Awareness (SA) in the human. This is most easily seen in the autopilot situation Moin refers to, where it could even be viewed (with 20/20 hindsight!) as unreasonable to expect a pilot to react quickly to an automation failure or drop-out when the automation has been operating a system with a highly complex configuration of states, many of which can be quite abstract. True, many of these states will be instantly expected by a well-trained pilot, but unless the pilot is trained exhaustively in all possible states of both system and environment (and permutations thereof), the possibility of a critical loss of SA remains, and the pilot will not have the up-to-date knowledge to act with maximum effectiveness. And let’s not forget the dynamics of the automation-to-pilot failover. Indeed, such a situation can lead to a conflict of 'purpose' between the pilot and the aircraft systems, such as the well-known air show crash in which the pilot was trying to gain height while the aircraft system was trying to land.
    I tend to see the solution to this latter conflict as being to make the automation provide better feedback to the pilot, to keep him apprised of the detailed configuration of the system even while on autopilot, but even this approach has many problems (the complexity of the configuration and system modes, and maintaining attention/interest over long periods of time, to name but a few).
    What does the community think?
