Understanding automation transparency and its adaptive design implications in safety-critical systems
Safety & digital transition
Saghafian, M., Vatn, D. M. K., Moltubakk, S. T., Bertheussen, L. E., Petermann, F. M., Johnsen, S. O. & Alsos, O. A. (2025). Understanding automation transparency and its adaptive design implications in safety-critical systems. Safety Science, 184.
Our opinion
This new work by Norwegian colleagues from Trondheim, bringing together an economist, an automation expert, a designer of complex systems, and human factors specialists, offers an in-depth analysis of both theory and practice on a topic that has recurred for 40 years: the necessary transparency of automation for the operator. It is a valuable read for discovering (or rediscovering) many key ideas worth upholding from a body of literature on this topic built up over four decades.
The text can inform Foncsi’s strategic analysis of safety practices in the era of digital transformation.
The issue of automation and its (proper) coupling with operators is not new. It emerged in the 1980s with the arrival of new automated aircraft. The topic remains relevant today, as evidenced by recent Boeing 737 Max accidents.
From the beginning, the idea emerged that automation must be designed to be sufficiently transparent so that the operator understands what the automated system is doing and can take control if necessary.
Transparency has been defined, in particular, as "the level of detail with which automation communicates to the operator the reasoning underlying its advice or solution to a problem" (Jans et al., 2019).
However, in practice, the concept of transparent automation design (for the operator) remains far from unequivocal. At least two quite different perspectives can be identified: the notions of “seeing through” and “seeing into”.
Seeing through
This approach aims to make the automation as invisible as possible to the operator, acting merely as an efficient but discreet facilitator between manual action and the desired result. This is the case, for example, with telerobotics.
Seeing into
This approach is far more ambitious. It seeks to enable collaboration between humans and automation by providing a real-time representation of what the autonomous agent is responsible for, its objectives, its capabilities, its internal functioning, and its impact on the human operator's performance (Jamieson et al., 2022).
The Impact of Transparency on Situation Awareness, Trust, Safety, Workload, and Performance
Transparency has, from the outset, been closely related to the concept of Situation Awareness (SA), defined nearly 40 years ago by Mica Endsley (1988) as “the perception of elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status into the near future”.
This theory of Situation Awareness consists of three levels:
- The perception of individual elements in the environment and their properties.
- The comprehension resulting from the integration of perceived information, the deduction of relationships between elements, and the meaning of these relationships.
- The projection into the future to guide the operator in decision-making and action.
The Situation Awareness-based Agent Transparency (SAT) model, oriented towards interactive human-intelligent agent systems
The SAT model emphasizes the human need to be aware of both the agent’s situation and the environment. Since then, additional models have been proposed as extensions. These include dynamic SAT (Chen et al., 2018), human-robot transparency (Lyons, 2013), and the coactive system model based on observability, predictability, and directability (OPD) (Johnson et al., 2014).
Human factors specialists have long advocated for maintaining five priorities in the design of transparent interfaces:
- Considering the level of automation.
- Ensuring quality situation awareness.
- Calibrating the level of trust, avoiding both insufficient trust and, especially, overconfidence.
- Achieving the desired performance outcomes.
- Managing the induced workload level.
As the text explores further, many of these goals do not improve monotonically with transparency but have an optimum (a U-shaped-curve logic), which complicates design and calls for further research.
Time (experience in using the system) and, more broadly, limiting how much of the automation's internal functioning is conveyed, are key variables for calibrating the desired transparency of human-machine interfaces (HMI).
This aligns closely with the literature on explainability in AI.
A Nonlinear Relationship Between Transparency Level and Positive Outcomes for the Operator
As a preliminary note, we should recall the historical contributions of Sheridan (1978), who described a continuum in the degree of automation and the distribution of tasks between humans and technology. This continuum ranges from full manual control (level 1, with no autonomy) to full automation (level 5, with total autonomy).
The remainder of the document focuses more closely on 14 studies that placed transparency at the center of human-machine interface design in automated systems. The findings of these studies are grouped by key elements, drawing inspiration from literature reviews on the subject, particularly Van de Merwe et al. (2022).
Variations in the Understanding and Definition of the Term Transparency
Eight key points emerge.
A Widely Recognized Concept
The principle and the very need for transparency are acknowledged by the entire human factors community and by designers of interfaces for automated systems.
Transparency as a Characteristic
Transparency can be defined as a characteristic or capability (of interpretation) of a system: the ability to present information in a way that allows humans to understand and predict the system's future course of action.
The degree of transparency is linked to the system’s ability to communicate its state, internal processes, functions, and capabilities to the human operator, thereby enabling the operator to influence the system (Mbanisi & Gennert, 2022). Transparency quality is measured on a scale ranging from a minimum, where the technology’s behavior is difficult to understand, predict, and direct, to a maximum, where the behavior is easily understandable and predictable (Olatunji et al., 2021).
Transparency as a Goal
Transparency has also been defined as the end goal of a process aimed at creating shared awareness and shared intent, which develops over the course of the interaction, with needs that vary over time and goals that depend on the context.
Transparency and Explainability
A term often closely associated with transparency is explainability, which refers to an autonomous system’s ability to provide users with understandable and meaningful explanations of its behavior and decision-making processes. While this definition is relatively similar to that of transparency, explainability is defined in the literature either as an aspect of transparency or vice versa (Karran et al., 2022; Luo et al., 2019).
What is clear is that the concept of explainability applies exclusively to autonomous systems, as it systematically refers to explaining what the intelligent agent does in order to improve performance and trust. This is also reflected in the definition of explainability given by Olatunji et al. (2021): the degree to which the behavior and decision-making processes of the autonomous agent are understandable and predictable for the user. If its behavior can be explained to the user, the agent is considered transparent (Roundtree et al., 2021).
Vered et al. (2020) define Explainable Artificial Intelligence (XAI) as intelligence that can justify its complex behavior to the user. Roundtree et al. (2021) consider explainability equivalent to usability in the sense that it influences the user’s perception. However, it is worth noting that a system can be transparent in its internal functioning without being comprehensible.
Transparency and Trust
Transparency has positive effects on trust in, and acceptance of, autonomous systems; and user trust strongly influences acceptance and continued use of the system. The term "calibrated trust" describes the state in which the trust placed in the system matches the actual reliability of the automation. Any inconsistency observed during use creates a mismatch, either overconfidence or mistrust, a phenomenon known as calibration bias (de Visser et al., 2018). Trust calibration is crucial for appropriate trust in automation (Lebiere et al., 2021) and for safety. Unfortunately, there is no universally applicable measure of trust (de Visser, 2018).
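To make the calibration idea concrete, here is a minimal sketch (ours, not drawn from the reviewed paper) of how a trust-calibration check could be expressed. The 0-to-1 scales, the tolerance threshold, and the labels are illustrative assumptions.

```python
def classify_trust_calibration(reported_trust: float,
                               observed_reliability: float,
                               tolerance: float = 0.1) -> str:
    """Toy classification of trust calibration.

    reported_trust: operator's subjective trust, on an illustrative 0..1 scale.
    observed_reliability: measured success rate of the automation, 0..1.
    tolerance: illustrative threshold below which trust is called 'calibrated'.
    """
    gap = reported_trust - observed_reliability
    if abs(gap) <= tolerance:
        return "calibrated trust"
    if gap > 0:
        return "overtrust (risk of complacency)"
    return "mistrust (risk of disuse)"


# Example: the automation succeeds 80% of the time, but the operator trusts it at 0.95.
print(classify_trust_calibration(0.95, 0.80))  # -> overtrust (risk of complacency)
```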
Anthropomorphism in the design of the automation is often advocated as a way to strengthen both transparency and trust, but this concept unfolds across a wide continuum in practice.
However, transparency has its limits, especially when it comes to conveying uncertainties. Kunze et al. (2019) conducted an experimental study of how communicating uncertainties in automated systems affects various metrics, and highlighted the potential benefit of conveying the system's "inherent uncertainties". Yet other studies (Bhaskara et al., 2020) have shown that excessive transparency about all possible uncertainties can overwhelm operators. Akash et al. (2020) emphasize that the negative effect of transparency on workload could compromise other factors such as safety, trust, and acceptance, and stress the need to focus on the interaction between these factors. One of their main conclusions is that greater transparency is far from always beneficial for trust or workload: the optimal level depends on a dynamic interaction between trust, workload, decision type, individual experience, and the current level of transparency.
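To illustrate this nonlinearity, the following toy numerical model (our own illustration, not taken from Akash et al.) assumes that the operator's understanding saturates as more detail is conveyed while the workload cost keeps growing, so the net benefit peaks at an intermediate level of transparency. All functional forms and coefficients are invented for illustration.

```python
import math

def net_benefit(transparency: float, workload_sensitivity: float = 0.8) -> float:
    """Toy model: benefit of transparency minus its workload cost.

    transparency: amount of detail conveyed, on an illustrative 0..1 scale.
    workload_sensitivity: how strongly extra detail loads the operator
    (illustrative parameter; e.g. higher under time pressure).
    """
    understanding = 1 - math.exp(-4 * transparency)        # saturating gain
    workload_cost = workload_sensitivity * transparency ** 2  # growing cost
    return understanding - workload_cost

# In this toy model the optimum sits at an intermediate level,
# and shifts when the context (workload sensitivity) changes.
levels = [i / 10 for i in range(11)]
best = max(levels, key=net_benefit)
print(f"best transparency level in this toy model: {best:.1f}")
```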
Using Transparent Design at All Interface Levels
The interface between a human operator and an intelligent agent can be considered at three levels: informational, communicative, and physical. Transparent design can be applied at each of these levels.
For example, at the informational level, it is important to consider the amount of information and the extent to which it is intuitive.
At the communicative level, transparency can be enhanced by providing prompt responses.
At the physical level of the interface, the aesthetic features of the interface, such as the use of emotional expressions when appropriate, can contribute to transparency and, ultimately, acceptance (Wang et al., 2022).
Transparent design must transmit both verbal and visual information, through vocal and visual channels (Wang et al., 2022), in real time, and verbal feedback should be synchronized with visual feedback. Displaying visual information is essential to improving transparency: key automated functions should be displayed on-screen in a way that makes it easy to track the automation's activity (Skraaning & Jamieson, 2021).
Adapting Design to Users and Their Conceptual Needs, Adopting Contextual Adaptive Interfaces
Transparency can be improved by making a system more adaptive and user-driven: the system adapts to the specific needs of the user in each particular context. An interface can be designed to present only basic information, leaving the operator the option to access a menu to add context-specific elements, explanations of options chosen by the automation, and information about anticipated actions and upcoming situations.
In addition to individual design considerations, it is also important to account for context: the order in which information is presented, its quantity and quality, its timing, and the modes of communication must all be adapted to the context.
More broadly, priority should be given to demand-driven transparency rather than to a rigid, sequential transparency.
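As a hypothetical sketch of what such demand-driven transparency could look like in an interface layer (the item names, context signals, and expansion rule are ours, not the paper's), a display might start from a baseline set of items and add detail only when the operator requests it or when the situation becomes critical:

```python
from dataclasses import dataclass

@dataclass
class Context:
    """Illustrative context signals an adaptive HMI might use."""
    criticality: str = "normal"            # e.g. "normal" or "critical"
    operator_requested_detail: bool = False

# Baseline items shown by default; richer items shown on demand (illustrative lists).
BASELINE = ["current mode", "active goal", "next planned action"]
ON_DEMAND = ["rationale for chosen option", "confidence / uncertainty",
             "anticipated situations"]

def items_to_display(ctx: Context) -> list[str]:
    """Demand-driven transparency: start minimal, expand on request
    or when the situation becomes critical (illustrative rule)."""
    items = list(BASELINE)
    if ctx.operator_requested_detail or ctx.criticality == "critical":
        items += ON_DEMAND
    return items

print(items_to_display(Context()))                                # minimal view
print(items_to_display(Context(operator_requested_detail=True)))  # expanded view
```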
Improving Shared Awareness within the Team
Roundtree et al. (2021) analyzed transparency in visualization and concluded that collective visualization could be considered more transparent: despite individual differences and varying capabilities, operators were able to perform similar tasks. Moreover, collective visualization imposed a lower overall workload, created fewer physical and temporal demands, and caused less frustration.
How Far Should Transparency Go in Complex Systems?
In critical situations, priority must be given to data that matches the demand and to the balance between quality and quantity, which argues for dynamic adaptability of the interface and for moving away from a rigid, uniform interface design.
These implications also seem to align with the concept of explainability, which is gaining importance in the field of artificial intelligence.
However, adaptability and dynamic transparency have their own limits, because automation systems are becoming more complex and more autonomous, which in turn raises the level of uncertainty. It can be expected that internal processes will become more complex, and perhaps even less transparent, with AI.
Caution is therefore needed, and one should avoid trying to "see into" every decision made by fully autonomous agents. The outcome could be negative for the operator if the explanation becomes impossible to understand (too complex) or too distant from what the operator can grasp in the real-time dynamics of the action.
An alternative, possibly more realistic approach could be to limit the ambition for transparency to prepared levels of detail and context within a well-defined design domain. The term “operational design domain” is often used to describe the conditions under which an autonomous system operates safely (Lee et al., 2017).
Commentary by Éric Marsden, Program Manager at Foncsi
As René Amalberti points out, this literature review covers only part of the academic work on automation transparency. A first community focused on the design of computer systems and their interfaces, while a second community was more concerned with what happens on the other side of the screen, with the humans interacting with these automated systems.
This literature review mainly addresses works on design, particularly those conducted after 2017 (although a handful of articles from the early reflections on this topic in the 1980s are also mentioned). This is partly explained by the chosen methodology, which only includes articles that use the terms “automation transparency” or “transparency”, while the cognitive science research community tended to use the terms “situation awareness” and “sensemaking”.
The discussion also takes a narrow view of automation, in which a human operator interacts with a single automated system. A body of work from the 2000s on computer systems seen as cooperating agents, used by collectives rather than by individuals (such as work on the concept of "distributed situation awareness"), is not well covered in this review, although these topics are increasingly important today.
For an overview of these works, see, for example:
> Klein, G., Woods, D. D., Bradshaw, J. M., Hoffman, R. R. & Feltovich, P. J. (2004). Ten challenges for making automation a "team player" in joint human-agent activity. IEEE Intelligent Systems, 19(6).