Pitfalls of Automation
Development efforts for automated systems are usually justified in part by their presumed impacts on human performance: reduced workload, enhanced productivity, and fewer errors. Automation has generally failed to live up to these expectations, due not to the automation itself but to its inappropriate application and design [1]. Studies of failures in automation reveal an "epidemic of clumsy use of technology" [2] that creates new cognitive demands on the operator rather than freeing the operator's time, diverts attention to the interface rather than focusing it on the job, creates the potential for new kinds of errors and "automation surprises" rather than reducing errors, and imposes new and more difficult knowledge and skill requirements rather than reducing them.
A key finding [3] of these studies is that "strong, silent" systems, i.e., those implemented as black boxes, are difficult to direct and yield a system with only two modes: fully automatic and fully manual. In such cases the human operator is apt to interrupt the automated agent and take over the problem entirely if the agent is not solving it adequately. This situation results from the "substitution myth" [3], whereby developers often assume that adding automation is a simple substitution of a machine activity for a human activity. Instead, partly because activities are highly interdependent or coupled, adding or expanding the machine's role changes the cooperation needed and the role of the human operator. Table 1 summarizes the apparent benefits of automation in contrast to the empirical observations of operational personnel [3]; a short code sketch contrasting a two-mode agent with a directable one follows the table.
Table 1. Apparent benefits of automation contrasted with the empirical observations of operational personnel [3].

Apparent benefit | Empirical observation
Better results are obtained from "substitution" of machine activity for human activity. | Practices are transformed; the roles of people change.
Work is offloaded from the human to the machine. | New kinds of cognitive work are created for the human, often at the wrong times.
The operator's attention will be focused on the correct answer. | More threads must be tracked; it is harder for operators to remain aware of and integrate all of the activities and changes around them.
Less operator knowledge is required. | New knowledge and skill demands are imposed on the operator.
Errors are reduced. | New problems and potentials for error are introduced.
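To make the two-mode problem concrete, the following is a minimal Python sketch, purely illustrative and not taken from any of the cited systems (all class and method names are invented). It contrasts a black-box agent that can only be engaged or disengaged with a directable agent that accepts operator constraints while it continues to work.

```python
# Illustrative sketch only: contrasts a "strong, silent" black box, which the
# operator can merely engage or disengage, with a directable agent that accepts
# partial direction. All names here are hypothetical, not from the cited work.

class BlackBoxAutopilot:
    """Two modes only: fully automatic or fully manual."""

    def __init__(self):
        self.engaged = False

    def engage(self):
        self.engaged = True           # automation takes the whole problem

    def disengage(self):
        self.engaged = False          # operator takes the whole problem back

    def step(self, state):
        if not self.engaged:
            return None               # operator is entirely on their own
        return self._plan(state)      # internal reasoning is never exposed

    def _plan(self, state):
        return {"action": "hold", "based_on": state}


class DirectableAutopilot(BlackBoxAutopilot):
    """Adds intermediate control: the operator can constrain or redirect the
    agent without disengaging it entirely."""

    def __init__(self):
        super().__init__()
        self.constraints = {}

    def direct(self, **constraints):
        self.constraints.update(constraints)    # e.g. direct(max_descent_rate=0.5)

    def step(self, state):
        plan = super().step(state)
        if plan is not None:
            plan.update(self.constraints)       # honor operator direction
        return plan


if __name__ == "__main__":
    agent = DirectableAutopilot()
    agent.engage()
    agent.direct(max_descent_rate=0.5)          # constrain, do not take over
    print(agent.step({"altitude": 10000}))
```

The point of the sketch is only the shape of the interface: the operator can redirect the agent without being forced into a full manual takeover.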
Billings [4] enumerates several fundamental attributes that are common to failures in automation and human/automation interaction:
- Automation Complexity: The details of machine functions may appear quite simple because only a partial or metaphorical explanation is provided, and the true complexity of the operation remains hidden from the user.
- Coupling Among Machine Elements: Internal relationships and
interdependencies between or among machine functions are not made obvious to
the user, resulting in unexpected and apparently erroneous behavior.
- Machine Autonomy: Real, self-initiated machine activity
requires the human operator to determine if the perceived behavior is
appropriate or represents a failure.
- Opacity (Inadequate Feedback): The machine does not
communicate what it is doing or why it is doing it, or communicates poorly or
ambiguously.
- Peripheralization: Complex machines tend to distance
operators from the details of an operation, and if the machines are reliable,
operators will over time become less concerned with and aware of the details
of the operation.
- Brittleness: The system performs well while it is within the
envelope of its design but behaves unpredictably otherwise.
- Clumsiness: The operator has little to do when things are
going well, but the computer demands more activity from the operator at times
when the workload is already high.
- Surprises: The machine behaves in an unexpected, apparently
erroneous manner.
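Several of these attributes (opacity, machine autonomy, and surprises in particular) stem from automation that acts without explaining itself. The Python sketch below is a hypothetical illustration, not drawn from Billings [4]; all class and method names are invented. It shows one way an agent might publish what it is doing, why it is doing it, and any self-initiated mode change before that change takes effect.

```python
# Hypothetical sketch of an agent that counters opacity and "automation
# surprises" by publishing its current activity, its rationale, and any
# self-initiated mode change before acting on it. Names are illustrative.

from dataclasses import dataclass, field
from typing import Callable, List, Optional


@dataclass
class StatusReport:
    mode: str                             # what the agent is doing
    rationale: str                        # why it is doing it
    pending_change: Optional[str] = None  # announced before it takes effect


@dataclass
class TransparentAgent:
    mode: str = "idle"
    rationale: str = "awaiting task"
    listeners: List[Callable[[StatusReport], None]] = field(default_factory=list)

    def subscribe(self, listener: Callable[[StatusReport], None]) -> None:
        """Register an operator display to receive every status report."""
        self.listeners.append(listener)

    def _publish(self, report: StatusReport) -> None:
        for listener in self.listeners:
            listener(report)

    def propose_mode(self, new_mode: str, rationale: str) -> None:
        """Announce a self-initiated mode change before committing to it."""
        self._publish(StatusReport(self.mode, self.rationale, pending_change=new_mode))
        self.mode, self.rationale = new_mode, rationale
        self._publish(StatusReport(self.mode, self.rationale))


if __name__ == "__main__":
    agent = TransparentAgent()
    agent.subscribe(lambda r: print(
        f"mode={r.mode} rationale={r.rationale!r} pending={r.pending_change}"))
    agent.propose_mode("descent", "target altitude captured")
```

An operator display subscribed to such an agent sees every activity, its rationale, and pending changes before they occur, which addresses opacity directly and reduces the opportunity for surprises.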
Human-Centered Design Principles for Automation
Lessons learned from these past failures have led to an understanding of the importance of human-centered design principles in the development of intelligent and automated systems. Essentially, the human-centered design approach to automation seeks to keep the operator in command, requiring that the operator be informed and involved with monitoring the automation. An intelligent system must be inspectable (i.e., provide indications of what it is doing and why), predictable (i.e., support the operator's need to anticipate the behavior of the system), repairable (i.e., allow the operator to assume control and fix the system), intelligent (i.e., learn from operator overrides), maintainable, and extensible [5]. The interface design must provide obvious opportunities for action on the part of the operator and must be tailored around the operator's activities at all levels of interaction. Perhaps the most significant finding is that the human operator must likewise be inspectable and predictable to the intelligent system. This requires that a formal model of the human operator's tasks and actions be created. This model is used to design the interaction environment and can also be exploited by the intelligent system to perform operator intent inferencing [6]; a minimal sketch of this idea appears after the first principles below.
Billings [4] offers the following "first principles" as essential elements of an over-arching philosophy for human-centered systems:
- Premise: Humans are responsible for outcomes in human-machine
systems.
- Axiom: Humans must be in command of human-machine
systems. This is axiomatic if one accepts the premise. The axiom
implies certain corollaries which are consistent with past experiences with
human-machine systems.
- Corollary: Humans must be actively involved in the processes
undertaken by these systems.
- Corollary: Humans must be adequately informed of human-machine
system processes.
- Corollary: Humans must be able to monitor the machine components of
the system.
- Corollary: The activities of the machines must therefore be
predictable.
- Corollary: The machines must also be able to monitor the
performance of the humans.
- Corollary: Each intelligent agent in a human-machine system must
have knowledge of the intent of the other agents.
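The last two corollaries require, in effect, a machine-readable model of the operator's tasks, as noted above. The Python sketch below is a minimal, hypothetical illustration of operator intent inferencing over such a task model; it is not the method described in [6] or the operator models of [7], and every task and action name in it is invented. Observed operator actions are matched against each candidate task, the best partial match is reported as the inferred intent, and actions that fit no known task can be flagged for attention.

```python
# Minimal, hypothetical illustration of operator intent inferencing over a
# task model: each task is described by the ordered actions that accomplish
# it, observed actions are matched against every task, and the best partial
# match is reported as the inferred intent. Not the method of [6] or [7].

from typing import Dict, List, Optional, Tuple

# A toy task model: task name -> expected sequence of operator actions.
TASK_MODEL: Dict[str, List[str]] = {
    "restart_pump":  ["close_valve", "reset_breaker", "open_valve", "start_pump"],
    "isolate_leak":  ["close_valve", "vent_line", "tag_out"],
    "routine_check": ["read_gauge", "log_reading"],
}


def infer_intent(observed: List[str]) -> Tuple[Optional[str], float]:
    """Return the task whose action sequence best matches the observed actions,
    plus the fraction of observed actions that the task explains."""
    best_task, best_score = None, 0.0
    for task, expected in TASK_MODEL.items():
        # Count observed actions that appear in the expected sequence, in order.
        idx, matched = 0, 0
        for action in observed:
            if idx < len(expected) and action == expected[idx]:
                matched += 1
                idx += 1
        score = matched / len(observed) if observed else 0.0
        if score > best_score:
            best_task, best_score = task, score
    return best_task, best_score


if __name__ == "__main__":
    actions = ["close_valve", "reset_breaker"]
    task, score = infer_intent(actions)
    if task is None or score < 0.5:
        print("Operator actions fit no known task: possible error or new intent.")
    else:
        print(f"Inferred intent: {task} (confidence {score:.0%})")
```

A real implementation would rest on a far richer task model, but even this toy version shows how a shared, explicit model lets the machine monitor the operator's performance and reason about operator intent.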
References
[1] Thurman, D. A., Brann, D. M., and Mitchell, C. M., “An Architecture to Support Incremental Automation of Complex Systems”, Proceedings of the 1997 IEEE International Conference on Systems, Man, and Cybernetics, Orlando, FL (to appear).
[2] Woods, D. D., Patterson, E. S., Corban, J. M., and Watts, J. C., “Bridging the Gap Between User-Centered Intentions and Actual Design Practice”, on web site http://csel.eng.ohio-state.edu:8080/~csel/BridgeGapUserCtrInt.html.
[3] Woods, D. D., “Human-Centered Software Agents: Lessons from Clumsy Automation”, position paper for National Science Foundation Workshop on Human-Centered Systems: Information, Interactivity, and Intelligence, Arlington VA, February 1997.
[4] Billings, C. E., “Issues Concerning Human-Centered Intelligent Systems: What’s ‘human-centered’ and what’s the problem?”, plenary talk at National Science Foundation Workshop on Human-Centered Systems: Information, Interactivity, and Intelligence, Arlington VA, February 1997.
[5] Brann, D. M., Thurman, D. A., and Mitchell, C. M., “Human Interaction with Lights-out Automation: A Field Study”, Proceedings of the 1996 Symposium on Human Interaction with Complex Systems, Dayton OH, August 1996, pp. 276-283.
[6] Callantine, T., “Intent Inferencing”, on web site http://www.isye.gatech.edu/chmsr/Todd_Callantine/CHII.html.
[7] Mitchell, C. M., “Models for the Design of Human Interaction with Complex Dynamic Systems”, Proceedings of the Cognitive Engineering Systems in Process Control, November 1996.
[8] Thurman, D. A. and Mitchell, C. M., “A Design Methodology for Operator Displays of Highly Automated Supervisory Control Systems”, Proceedings of the 6th Annual IFAC/IFORS/IFIP/SEA Symposium on Man-Machine Systems, Boston MA, July 1995.
[9] Thurman, D. A. and Mitchell, C. M., “A Methodology for the Design of Interactive Monitoring Interfaces”, Proceedings of the 1994 IEEE International Conference on Systems, Man, and Cybernetics, San Antonio TX, October 1994, pp. 1739-1744.
[10] “Field Guide for Designing Human Interaction with Intelligent Systems”, Draft, on web site http://tommy.jsc.nasa.gov/~clare/methods/methods.html, December 12, 1995.