THE INTERNATIONAL C2 JOURNAL

David S.
Alberts, Chairman of the Editorial Board, OASD-NII, CCRP
Joseph R. Lewis, Managing Editor

The Editorial Board

Berndt Brehmer (SWE), Swedish National Defence College
Reiner Huber (GER), Universitaet der Bundeswehr Muenchen
Viggo Lemche (DEN), Danish Defence Acquisition and Logistics Organization
James Moffat (UK), Defence Science and Technology Laboratory (DSTL)
Mark Nissen (USA), Naval Postgraduate School
Ross Pigeau (CAN), Defence Research and Development Canada (DRDC)
Mink Spaans (NED), TNO Defence, Security and Safety

Associate Editors

Gerard Christman, U.S. OSD Technical Services - Femme Comp Inc.
R. Scott Cost, Johns Hopkins University Applied Physics Laboratory
Raymond J. Curts, Strategic Consulting, Inc.
Paul K. Davis, RAND Corporation
Petra M. Eggenhofer, Munich Bundeswehr University
Elliot Entin, Aptima
Stuart Grant, Defence Research and Development Canada (DRDC)
Juergen Grosche, FGAN-FKIE, Germany
Paul Labbé, Defence Research and Development Canada
Michael Malm, Swedish Defence Research Agency
Sandeep Mulgund, The MITRE Corporation
Philip W. Pratt, Northrop Grumman
Jens Roemer, Fachschule der Bundeswehr für Informatiktechnik
Pamela Savage-Knepshield, U.S. Army Research Laboratory, Human Research & Engineering Directorate
Keith Stewart, Defence Research and Development Canada (DRDC)
Andreas Tolk, Old Dominion University

About the Journal

The International C2 Journal was created in 2006 at the urging of an international group of command and control professionals including individuals from academia, industry, government, and the military. The Command and Control Research Program (CCRP, of the U.S. Office of the Assistant Secretary of Defense for Networks and Information Integration, or OASD-NII) responded to this need by bringing together interested professionals to shape the purpose and guide the execution of such a journal. Today, the Journal is overseen by an Editorial Board comprising representatives from many nations.
Opinions, conclusions, and recommendations expressed or implied within are solely those of the authors. They do not necessarily represent the views of the Department of Defense or any other U.S. Government agency.

Rights and Permissions: All articles published in the International C2 Journal remain the intellectual property of the authors and may not be distributed or sold without the express written consent of the authors.

For more information, visit us online at: www.dodccrp.org

Contact our staff at: [email protected]

The International C2 Journal | Vol 1, No 2 | 43–68

Adaptive Automation for Human-Robot Teaming in Future Command and Control Systems

Raja Parasuraman (George Mason University)
Michael Barnes (Army Research Laboratory)
Keryl Cosenzo (Army Research Laboratory)

Abstract

Advanced command and control (C2) systems such as the U.S. Army’s Future Combat Systems (FCS) will increasingly use more flexible, reconfigurable components, including numerous robotic (uninhabited) air and ground vehicles. Human operators will be involved in supervisory control of uninhabited vehicles (UVs) with the need for occasional manual intervention. This paper discusses the design of automation support in C2 systems with multiple UVs. Following a model of effective human-automation interaction design (Parasuraman et al. 2000), we propose that operators can best be supported by high-level automation of information acquisition and analysis functions. Automation of decisionmaking functions, on the other hand, should be set at a moderate level, unless 100 percent reliability can be assured. The use of adaptive automation support technologies is also discussed. We present a framework for adaptive and adaptable processes as methods that can enhance human-system performance while avoiding some of the common pitfalls of “static” automation such as over-reliance, skill degradation, and reduced situation awareness.
Adaptive automation invocation processes are based on critical mission events, operator modeling, and real-time operator performance and physiological assessment, or hybrid combinations of these methods. We describe the results of human-in-the-loop experiments involving human operator supervision of multiple UVs under multi-task conditions in simulations of reconnaissance missions. The results support the use of adaptive automation to enhance human-system performance in supervision of multiple UVs, balance operator workload, and enhance situation awareness. Implications for the design and fielding of adaptive automation architectures for C2 systems involving UVs are discussed.

Introduction

Unmanned air and ground vehicles are an integral part of advanced command and control (C2) systems. In the U.S. Army’s Future Combat Systems (FCS), for example, uninhabited vehicles (UVs) will be an essential part of the future force because they can extend manned capabilities, act as force multipliers, and most importantly, save lives (Barnes et al. 2006). The human operators of these systems will be involved in supervisory control of semi-autonomous UVs with the need for occasional manual intervention. In the extreme case, soldiers will control UVs while on the move and while under enemy fire.

All levels of the command structure will use robotic assets such as UVs and the information they provide. Control of these assets will no longer be the responsibility of a few specially trained soldiers but the responsibility of many. As a result, the addition of UVs can be considered a burden on the soldier if not integrated appropriately into the system. Workload and stress will be variable and unpredictable, changing rapidly as a function of the military environment. Because of the likely increase in the cognitive workload demands on the soldier, automation will be needed to support timely decisionmaking.
For example, sensor fusion systems and automated decision aids may allow tactical decisions to be made more rapidly, thereby shortening the “sensor-to-shooter” loop (Adams 2001; Rovira et al. 2007). Automation support will also be mandated because of the high cognitive workload involved in supervising multiple unmanned air combat vehicles (for an example involving tactical Tomahawk missiles, see Cummings and Guerlain 2007).

The automation of basic control functions such as avionics, collision avoidance, and path planning has been extensively studied and as a result has not posed major design challenges (although there is still room for improvement). How information-gathering and decisionmaking functions should be automated is less well understood. However, the knowledge gap has narrowed in recent years as more research has been conducted on human-automation interaction (Billings 1997; Parasuraman and Riley 1997; Sarter et al. 1997; Wiener and Curry 1980). In particular, Parasuraman et al. (2000) proposed a model for the design of automated systems in which automation is differentially applied at different levels (from low, or fully manual operation, to high, or fully autonomous machine operation) to different types or stages of information-processing functions. In this paper we apply this model to identify automation types best suited to support operators in C2 systems involved in interacting with multiple UVs and other assets during multi-tasking missions under time pressure and stress. We describe the results of two experiments. First, we provide a validation of the automation model in a study of simulated C2 involving a battlefield engagement task. This simulation study did not involve UVs, but multi-UV supervision is examined in a second study.
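The stage-by-level design space of this model can be sketched as a small data structure. The following Python fragment is purely illustrative: the four stage names follow Parasuraman et al. (2000) and the 1-10 level scale follows Sheridan and Verplank (1978), but the `check_profile` helper, the moderate-level ceiling of 5, and the example profile values are hypothetical choices made for this sketch, not part of the published model.

```python
from enum import Enum

class Stage(Enum):
    """Four information-processing stages that automation can support
    (Parasuraman et al. 2000)."""
    INFORMATION_ACQUISITION = 1
    INFORMATION_ANALYSIS = 2
    DECISION_SELECTION = 3
    ACTION_IMPLEMENTATION = 4

# Levels of automation run from 1 (fully manual) to 10 (fully
# autonomous machine operation), after Sheridan and Verplank (1978).
MANUAL, FULL_AUTONOMY = 1, 10

def check_profile(profile, high_risk=True, max_decision_level=5):
    """Hypothetical design check: flag out-of-range levels, and flag
    decision-stage automation above a moderate ceiling when the
    mission is high-risk, per the guidance discussed in the text."""
    warnings = []
    for stage, level in profile.items():
        if not MANUAL <= level <= FULL_AUTONOMY:
            warnings.append(f"{stage.name}: level {level} out of range")
        elif (high_risk and stage is Stage.DECISION_SELECTION
              and level > max_decision_level):
            warnings.append(f"{stage.name}: level {level} exceeds "
                            f"moderate ceiling {max_decision_level}")
    return warnings

# Invented example profile: high information automation, moderate
# decision automation, near-manual action implementation.
uv_profile = {
    Stage.INFORMATION_ACQUISITION: 9,
    Stage.INFORMATION_ANALYSIS: 8,
    Stage.DECISION_SELECTION: 4,
    Stage.ACTION_IMPLEMENTATION: 2,
}
print(check_profile(uv_profile))  # → [] (no warnings for this profile)
```

The point of the sketch is only that the model treats the level of automation as a separate design decision for each stage, rather than a single system-wide setting.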
We propose that automation of early-stage functions—information acquisition and analysis—can, if necessary, be pursued to a very high level and provide effective support of operators in C2 systems. On the other hand, automation of decisionmaking functions should be set at a moderate level unless very high-reliability decision algorithms can be assured, which is rarely the case. Decision aids that are not perfectly reliable or sufficiently robust under different operational contexts are referred to as imperfect automation. The effects on human-system performance of automation imperfection—such as incorrect recommendations, missed alerts, or false alarms—must be considered in evaluating what level of automation to implement (Parasuraman and Riley 1997; Wickens and Dixon 2007). We also propose that the level and type of automation can be varied during system operations—so-called adaptive or adaptable automation. A framework for adaptive and adaptable processes based on different automation invocation methods is presented. We describe how adaptive/adaptable automation can enhance human-system performance while avoiding some of the common pitfalls of “static” automation such as operator over-reliance, skill degradation, and reduced situation awareness. Studies of human operators supervising multiple UVs are described to provide empirical support for the efficacy of adaptive/adaptable automation. We conclude with a discussion of adaptive automation architectures that might be incorporated into future C2 systems such as FCS. While our analysis and empirical evidence are focused on systems involving multiple UVs, the conclusions have implications for C2 systems in general.

Automation of Information Acquisition and Analysis

Many forms of UVs are being introduced into future C2 systems in an effort to transform the modern battle space. One goal is to have component robotic assets be as autonomous as possible.
This requires both rapid response capabilities and intelligence built into the system. However, ultimate responsibility for system outcomes always resides with the human; and in practice, even highly automated systems usually have some degree of human supervisory control. Particularly in combat, some oversight and the capability to override and control lethal systems will always be a human responsibility for reasons of system safety, changes in the commander’s goals, and avoidance of fratricide, as well as to cope with unanticipated events that cannot be handled by automation. This necessarily means that the highest level of automation (Sheridan and Verplank 1978) can rarely be achieved except for simple control functions.

The critical design issue thus becomes: What should the level and type of automation be for effective support of the operator in such systems (Parasuraman et al. 2000)? Unfortunately, automated aids have not always enhanced system performance, primarily due to problems in their use by human operators or to unanticipated interactions with other sub-systems. Problems in human-automation interaction have included unbalanced mental workload, reduced situation awareness, decision biases, mistrust, over-reliance, and complacency (Billings 1997; Parasuraman and Riley 1997; Sarter et al. 1997; Sheridan 2002; Wiener 1988).

Parasuraman et al. (2000) proposed that these unwanted costs might be minimized by careful consideration of different information-processing functions that can be automated. Their model for effective human-automation interaction design identifies four stages of human information processing that may be supported by automation: (stage 1) information acquisition; (stage 2) information analysis; (stage 3) decision and action selection; and (stage 4) action implementation.
Each of these stages may be supported by automation to varying degrees between the extremes of manual performance and full automation (Sheridan and Verplank 1978). Because they deal with distinct aspects of information processing, the first two stages (information acquisition and analysis) and the last two stages (decision selection and action implementation) are sometimes grouped together and referred to as information and decision automation, respectively (see also Billings 1997).

The Theater High Altitude Area Defense (THAAD) system used for intercepting ballistic missiles (Department of the Army 2007) is an example of a fielded system in which automation is applied to different stages and at different levels. THAAD has relatively high levels of information acquisition, information analysis, and decision selection; however, action implementation automation is low, giving the human control over the execution of a specific action.

Parasuraman et al. (2000) suggested that automation could be applied at very high levels, without any significant performance costs, to early-stage functions, i.e. information acquisition and analysis, particularly if the automation algorithms used were highly reliable. However, they suggested that for high-risk decisions involving considerations of lethality or safety, decision automation should be set at a moderate level such that the human operator is not only