Date:
December 2024

La maîtrise des situations dynamiques


Author(s):

René Amalberti

Reference:

Amalberti, R. (2001). La maîtrise des situations dynamiques. Psychologie Française, 46(2), 105-117.

Our opinion

5/5

Here is a summary of a foundational article by René Amalberti on the theoretical underpinnings of individual risk-taking. Originally published in the journal Psychologie française and never translated into English, though later incorporated into the author’s books in several languages, the article consolidates the findings Amalberti accumulated during the 1980s and 1990s on the management of high-risk systems, based in particular on research with fighter pilots.
Its key concepts for understanding individual risk management are “compromise” and “sufficiency”. These concepts have since been generalized and applied to every level of risk management: collective, organizational, and strategic.

Because research focused too heavily on errors and failures, lasting ambiguities crept into models of the operator. For twenty years, the structuring role of error in problem-solving was underestimated. In addition, the accumulated evidence that operators make many errors but recover from most of them was largely overlooked.
It took a more favorable context and an increase in field studies of highly complex situations in the late 1980s to shift the dominant way of thinking. This article is one of the foundations of that change.

In just a few years, there was a profound revision of the concept of cognitive optimality. Cognitive optimality should no longer be understood in terms of minimizing waste—especially instantaneous waste—such as avoiding all errors and failures, minimizing response time, maximizing comprehension, and immediately correcting detected defects. This hypothesis had dominated experimental protocols, instructions, and all safety-related disciplines for decades.
Instead, cognitive optimality should be viewed in terms of compromise, enabling a dynamic achievement of objectives (or rather, multiple objectives) with sufficient performance.
 

Three Key Ideas in This Theoretical Revision

  • The Concept of “Sufficiency”

This concept is often misunderstood as minimalist (i.e., aiming for the lowest cognitive cost). It should instead be understood as an adaptive response to the environment, one that provides subjective satisfaction to the person performing the task, given their goals, the context, and their expertise. The notion of sufficiency is reassessed with each execution and is not incompatible with high performance or a high cognitive cost.

  • “Dynamic Adaptation”

Performance fluctuates significantly over time, yet the overall response remains acceptable within the given timeframe. The available time and the target deadlines are the key metrics for evaluating cognitive performance—not the performance at every single moment before the deadline is reached.
Errors ultimately turn out to be the price to pay for a well-controlled compromise and are often secondary variables in mastering a situation.

  • “Metacognition” (Understanding One’s Own Abilities)

Metacognition allows individuals to calibrate their management of acceptable and accepted risks, particularly regarding the initial performance contract.

 

Time as a Cognitive Management Tool

After sufficiency, time is surely the second most misunderstood major cognitive value in the psychological literature on reliability. Experimental logic has often regarded it merely as a measure of intellectual capacity (reaction or response time).
It was only in the 1980s that time once again became a subject of study in its own right, rather than just a measurement tool.

Time serves as a cognitive management tool in two ways:
  • On the one hand, it is embedded in the very representation of activity and acts as a temporal indicator for organizing work. Operators are constantly exposed to vastly different time scales in their tasks: some operate on a scale of seconds, others on a scale of months. They often rely on maximum deadlines as reference points to structure their shared-time activities. These multiple deadlines can sometimes be misleading, but in the vast majority of cases, operators manage these parallel time systems effectively, using them as natural markers for distributing their activities throughout the day. 
     
  • On the other hand, time is a driver of world transformation and has an inherent potential for problem-solving and error correction. Since situations are dynamic, a problem at one moment is generally not the same problem at the next. Doing nothing can sometimes resolve many difficulties.

The same applies to error management: time is a valuable tool for identifying mistakes and even mitigating their consequences. This property of time is explored in greater depth in the following section, as it forms the foundation of the ecological regulation of risks. 
 

Time Regulation in Understanding

Recurring results show that when given a choice, operators prefer to act rather than to understand, as action facilitates comprehension.

Time Regulation in Error Management

Another series of recent studies highlights that error control is just one variable among many and, ultimately, a weak predictor of situational mastery.

Subjects produce a fairly constant flow of 1 to 3 errors per hour, regardless of their level of expertise—except for absolute beginners. Paradoxically, the number of errors tends to decrease in more demanding situations (due to heightened cognitive control). However, in these cases, the recovery rate also tends to collapse (as there are insufficient resources for both real-time control and recovery). The risk of losing control correlates more strongly with a declining recovery rate than with an increasing error rate.
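
To make this asymmetry concrete, here is a small illustrative simulation (not from the article; the hourly rates, recovery probabilities, and the loss-of-control threshold are invented for the sketch). It shows how a scenario with fewer errors but a collapsed recovery rate can still accumulate more unrecovered errors, and thus a higher risk of losing control:

```python
import random

def p_loss_of_control(errors_per_hour, recovery_rate,
                      unrecovered_limit=2, trials=100_000):
    """Toy Monte Carlo: an hour counts as 'lost' when unrecovered
    errors reach the limit. All parameter values are illustrative."""
    lost = 0
    for _ in range(trials):
        # Draw errors minute by minute at the given hourly rate.
        n_errors = sum(random.random() < errors_per_hour / 60
                       for _ in range(60))
        # Each error is independently recovered with probability recovery_rate.
        unrecovered = sum(random.random() > recovery_rate
                          for _ in range(n_errors))
        lost += unrecovered >= unrecovered_limit
    return lost / trials

# Routine conditions: ~2 errors/hour, ~75% recovered.
print(p_loss_of_control(2.0, 0.75))   # ~0.09
# Demanding conditions: fewer errors, but recovery collapses.
print(p_loss_of_control(1.5, 0.40))   # ~0.23
```

In this toy model, producing 25% fewer errors while recovery collapses still more than doubles the probability of losing control, which is the asymmetry the studies point to.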

Error flow is controlled in two ways. Between 70% and 80% of errors are detected, with higher detection rates for routine errors. Most importantly, expert subjects increasingly “let go” of errors that have no immediate impact on the task at hand because they know they can still manage the situation—an outcome of their metacognitive abilities.

 

The Theory of “Ecological Safety”

The previously discussed findings have served as the foundation for a theory of “ecological safety”, where “ecological” refers to the native, spontaneous character of human risk-management behavior.

Key Points of This Theory:

Mastery of a situation requires dual supervision: one over the external physical process and another over the mental process. The primary function of cognitive activities within the conscious field is to ensure mental supervision. When this is well-managed, supervision of the physical process can be handled at a relatively routine level, relying on highly proceduralized skills. In short, when mastery is achieved, physical process supervision is largely automated in normal situations, while situation mastery (internal supervision) remains a highly symbolic activity. This process operates with a constant brake to avoid falling into local optima (perfect understanding, perfect action) that become disconnected from the constraints of the physical process.

Different Cognitive Mechanisms Ensure Safety Within Each Supervision

For external supervision, routines incorporate their own control mechanisms. These controls have a relatively late activation threshold, requiring a significant drift in physical process values before triggering (often automatically) a corrective routine. The greater the drift, the easier and more routine the correction—provided it remains within the scope of known corrective skills. Conversely, when drift is subtle and less noticeable, correction requires more time, resources, and engagement of internal supervision, necessitating a non-routine problem-solving approach. This property is leveraged by cognition for short-term (tactical) control of the physical process—continuous, cost-effective, and efficient—using time as a priority arbiter and a buffer for emerging issues.
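
As a rough sketch of this late-activation logic (illustrative only, not the author's formalization: the threshold value, the deadline flag, and the three responses are all assumptions), the arbitration could be caricatured as follows:

```python
def external_supervision(drift, threshold=0.3, deadline_near=False):
    """Caricature of tactical control over the physical process.
    Values and labels are invented for illustration."""
    if abs(drift) >= threshold:
        # Marked drift: cheap, proceduralized, often automatic fix.
        return "trigger corrective routine"
    if deadline_near:
        # Subtle drift that can no longer wait: costly, non-routine
        # problem solving engaging internal supervision.
        return "escalate to internal supervision"
    # Otherwise time acts as a buffer: the drift may resolve itself.
    return "tolerate for now"
```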

 
For internal supervision of the mental process, symbolic activities are managed to coordinate the process effectively. However, this supervision must also be economical, making the best use of arbitration to achieve a level of “sufficiency” compatible with available cognitive resources. Not every doubtful point can be fully understood, and time constraints rarely allow for exploring all known solutions. Engaging with manageable risk experiences becomes a constant tool of cognitive management.
Similar to external supervision, but through a different mechanism, cognitive tactical control relies on the remaining time before deadlines and on the system's turbulent limits—those marked by signals indicating the imminent loss of control. These signals reflect internal supervision difficulties: too many errors, excessive time spent detecting errors, accumulating self-censorship due to lack of time and resources (while being certain that a bit more time would enable understanding), and an overwhelming number of pending actions. Through experience and learning, these signals arise well before actual loss of control, appearing at the first signs of difficulty (a concept known as “margin”). Their emergence prompts a shift in strategy and control mode, typically involving a revision of the original objectives.
In short, cognition can only effectively manage its internal and external risks by engaging with them. Preventing operators from experiencing these risks firsthand is both a psychological and ergonomic fallacy.
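
The margin mechanism can likewise be sketched as a watchdog over a few indicators of internal difficulty (the signal names, limits, and the 0.8 margin factor below are all assumptions made for the illustration):

```python
def internal_supervision(unrecovered_errors, pending_actions,
                         deferred_doubts, limits=(3, 6, 4),
                         margin=0.8):
    """Caricature of margin-based mode switching: strategy is revised
    at the first signs of difficulty, well before actual loss of control."""
    signals = (unrecovered_errors, pending_actions, deferred_doubts)
    load = max(s / limit for s, limit in zip(signals, limits))
    if load >= margin:
        return "revise objectives and change control mode"
    return "continue current strategy"
```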

 

Alongside the arbitration rules between the two competing supervisions, metacognition (the awareness of one's own abilities) is the other key variable in successfully mastering a situation.


At any given moment, the operator's cognitive “copy” of the situation is like an unfinished draft. The individual is aware that they have not fully understood everything, have not done everything they should have, and have made errors that have yet to be corrected. This awareness of the “unfinished” state organizes cognitive priorities and often explains deviations in behavior whose sole purpose is to buy more time to catch up on delays.
This notion of an unfinished draft creates significant challenges in system design and support tool integration. Assistance systems tend to be overly directive in their immediate correction of errors, which, despite good intentions, can severely disrupt the dynamic management of risks.
Once again, forcing an operator to work in a state of constant optimal performance is a psychological and ergonomic fallacy.
 

Conclusion

Ultimately, given these highly dynamic characteristics, situational mastery manifests through several practical paradoxes.
A strong sense of control in supervision is often accompanied by imperfect real-time performance. However, the key factor is the awareness that at least one viable solution (and preferably an alternative) exists—whether through individual or collective expertise. The error rate remains relatively high, understanding is limited, and guidance largely relies on reactive routines responding to environmental affordances.
Paradoxically, when this sense of mastery fades, the operator quickly enters a state of cognitive overload. This is reflected in a reduction of behavioral “waste”: fewer errors, stricter adherence to the nominal solution they believed to be effective, reduced parallel activity (such as personal thoughts), and an intense focus on finding an alternative solution.
When mastery is completely lost, the operator often retreats into a subset of the problem that they still understand and can manage without making mistakes. However, the rest of the situation—and the final outcome of the problem—is abandoned, potentially left to the collective or handed over to an automated system by default.

 


Commentary by Corinne Bieder, Scientific Director of Foncsi

Adaptation, dynamic goal achievement, and compromise—these are recurring themes in discussions about how organizations manage safety. Could the findings established nearly 25 years ago by René Amalberti on individual risk-system management inspire new reflections on how high-risk industries approach safety at an organizational level?

Without drawing premature conclusions or making overly simplistic analogies, this article invites parallels that raise important contemporary questions. High-risk industries must juggle multiple objectives—production, safety, environmental impact, security—seeking to balance them dynamically rather than optimizing each one individually, aiming instead for sufficiency across the whole system.


At the individual level, sufficiency emerges from a tension between external demands (objective, physical risks) and internal processes (subjective risks, such as the fear of no longer knowing how to act). Here, metacognition—one’s awareness of their own abilities—plays a crucial role. Could an equivalent concept of “internal risk” be introduced at an organizational level? To what extent, and by what means, do organizations—or could they—develop a dynamic understanding of their own competencies and confront this perception with external demands?


Additionally, the author highlights time as a cognitive management tool, emphasizing that operators prioritize reaching objectives within a given timeframe rather than engaging in continuous, real-time error correction. This observation is particularly thought-provoking in today’s context, where performance monitoring has become increasingly continuous at all levels, driven in part by the digitalization of a growing number of activities.