Reducing Risk of Human Error in Aviation
A Machine Learning Approach to Predicting Fatalities in Aviation Accidents

Introduction

The aviation industry has come a long way in terms of technological advancement, yet it continues to grapple with safety concerns, particularly those stemming from human factors. The article by Nogueira et al. (2023), entitled "A Machine Learning Approach to Predicting Fatalities in Aviation Accidents: An Examination," offers a new perspective on the matter by suggesting that machine learning can serve as a tool to better understand and predict accidents. This review critically examines the paper's assertions and argues that even though machine learning shows promise and utility, its application in the context of accident prevention is not without challenges. The review will present evidence supporting this stance, discuss the paper's context, and address contradictory evidence.

Discussion

Contextual Limitations

Nogueira et al. (2023) situate their paper within a specific context: the aviation industry's dynamics, the advancements made by machine learning, and the data sources available at the time of writing. These contextual factors shape the study's conclusions. Recognizing this context makes clear that the paper's findings may be constrained by the industrial, technological, and data-source limitations of its time.

This is not to say that the industry, the technology, and the data used are causes for concern in themselves; one can only work with what is available. However, the industry still depends to a significant extent on human involvement and decision-making at multiple levels, from policy to product development to risk management, quality control, and services. Human agency remains an integral component from start to finish, and however one applies machine learning to the subject at hand, machine learning cannot compensate for every aspect of human agency in the total picture.

Supporting the Promise of Machine Learning

Nonetheless, machine learning is presented as having some value and utility in the study, and this presentation is supported by academic evidence, both in the article itself and in the wider academic world (Gui et al., 2019; Nogueira et al., 2023). Machine learning certainly does have positives and real-world use cases (Brink et al., 2016).

For example, machine learning can process enormous amounts of data and identify patterns that give it predictive power. This ability is why Nogueira et al. (2023) view it as having the potential to help reduce safety risks caused by human error in aviation. The article focuses on the Multilayer Perceptron (MLP) and Random Forest (RF) models and their performance metrics to show this potential. The authors also explore how Active Learning (AL) supports the adaptability of machine learning models even when labeled data are scarce, a condition common in aviation (Nogueira et al., 2023).
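To make the model comparison concrete, the sketch below trains a Random Forest and a Multilayer Perceptron on a small synthetic dataset and compares their F1 scores. The data, feature count, and hyperparameters are invented for illustration and are not taken from Nogueira et al.'s pipeline; the sketch assumes scikit-learn is available.

```python
# Illustrative sketch: comparing RF and MLP classifiers on a synthetic
# stand-in for accident records (each row an accident, the label marking
# whether it involved fatalities). Not the authors' actual pipeline.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Synthetic data: 500 "accidents" with 10 numeric features each.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                    random_state=0).fit(X_tr, y_tr)

# Compare held-out F1 scores, one common performance metric.
scores = {"RF": f1_score(y_te, rf.predict(X_te)),
          "MLP": f1_score(y_te, mlp.predict(X_te))}
print(scores)
```

On real accident data, of course, the relative ranking of the two models would depend on the features, the labeling, and the tuning applied.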

Challenges and Contradictions

The emphasis of Nogueira et al.'s (2023) paper on machine learning as a solution for predicting human behavior in aviation accidents is an ambitious and laudable approach to addressing safety and security problems in the field. However, the real-world application of such models presents a number of challenges that cannot be overlooked or fully solved by machine learning tools; thus, researchers continue to suggest alternative solutions (Chen et al., 2019). Many of these challenges can be discussed at length, starting with the difficulty of predicting human behavior in intense situations.

Predicting Human Behavior

Human responses are inherently complex; they are affected by emotion as much as by logic. Patterns may be present in pre-existing data, but projections of future behavior based on such patterns cannot be entirely accurate, for there will always be some leeway, along with extraneous and surprising factors that go unaccounted for (Qui et al., 2022). This is especially likely in any intense environment, such as an aviation emergency, for it is precisely this kind of life-or-death situation that can defy the predictive power of algorithms (Osoba et al., 2017). Algorithms can be designed to identify patterns from past data, but they may falter when faced with situations they have not previously encountered or that differ in some fundamental way from the data they have received (Osoba et al., 2017). Moreover, how an algorithm is designed in the first place can shape how it interprets data (Osoba et al., 2017).

The unpredictability of such systems can also be examined through the different factors influencing human decisions. Emotions vary widely among people depending on factors such as gender, culture, and age, and these factors in turn can be deeply impacted by immediate circumstances and past experiences. Added to this, as Hermstruwer (2020) points out, is the problem of "the generalizability of machine-based outcomes, counterfactual reasoning, error weighting, the proportionality principle, the risk of gaming and decisions under complex constraints" (p. 199). Each of these must be addressed for machine learning to be fundamentally sound in predicting human behavior.

For example, a pilot's immediate reaction to an unexpected event might be shaped by a past experience, a recent conversation, or even their physical state at that moment. Decisions made in the blink of an eye may follow the actor's best or worst instincts rather than conscious thought. All of this adds layer upon layer of unpredictability. Personal experiences, cultural backgrounds, and individual training may also lead to different responses in similar situations. Thus, algorithms can offer valuable insights into generalized situations, but the complexity of human behavior and action, especially in high-stakes, high-pressure situations where different people may react differently, poses a real problem for predictive modeling.

Limitations of Algorithmic Predictions

At the heart of the matter, as it pertains to the article by Nogueira et al. (2023), is the belief that machine learning can account for the seemingly infinite number of inputs that may or may not impact human behavior. Human thinking and decision-making are not predictable, mechanical sequences that can be measured as precisely as the timing of a belt or the gears in a machine. Human behavior is affected by varying amounts of logical reasoning and emotional impulse. Algorithms are designed to process information and identify patterns through logical analysis of datasets, but accounting for the unpredictability of human nature in every scenario is impossible. One person's decision might be swayed by a recent emotional experience, a gut feeling, a whim, a moment of inspiration, or cultural influences that are difficult to quantify. So many intangible factors can play a role in determining human choices that predicting them with complete accuracy is a goal that eludes even the most sophisticated algorithms and models.

The authors argue that these machine learning algorithms can yield profound insights into the interplay between human factors and aviation accidents when they are used with comprehensive datasets (Nogueira et al., 2023). Their argument is supported by the models' performance, particularly the RF and AL algorithms, which showed substantial predictive power in various scenarios. However, the authors also acknowledge the limitations of their approach. They concede that their models were constrained by a lack of data on accidents that resulted in fatalities, which limited the algorithms' potential to yield more precise predictions (Nogueira et al., 2023). This is not a small concession; it is a major limitation. The authors propose that integrating more recent and larger datasets could markedly enhance the performance of the models, but this remains an untested hypothesis, not a basis for drastic policy change.
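The severity of this data-scarcity concession can be illustrated with a toy example: when fatal accidents make up only a small fraction of the data, a trivial model that never predicts a fatality still achieves high headline accuracy while flagging zero fatal cases. The 5% fatality rate and synthetic features below are assumptions chosen purely for illustration.

```python
# Sketch of the class-imbalance problem behind the authors' concession:
# scarce fatality labels let accuracy look good while the minority
# class goes entirely undetected. Synthetic data, illustrative only.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))               # 1000 synthetic "accidents"
y = (rng.random(1000) < 0.05).astype(int)    # ~5% labeled "fatal"

# A baseline that always predicts the majority class ("non-fatal").
baseline = DummyClassifier(strategy="most_frequent").fit(X, y)
pred = baseline.predict(X)

acc = accuracy_score(y, pred)   # high, because most accidents are non-fatal
rec = recall_score(y, pred)     # 0.0: no fatal accident is ever flagged
print(acc, rec)
```

Any model trained on such skewed data must beat this deceptive baseline on minority-class recall, not just accuracy, which is why the shortage of fatality records matters so much.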

The paper also asserts the relevance of the Human Factors Analysis and Classification System (HFACS) taxonomy in understanding the human factors contributing to aviation accidents. The authors correlate this taxonomy with their models' outputs and, in doing so, argue that they can pinpoint the critical human-factor causes that lead to fatal accidents (Nogueira et al., 2023). Their conclusion is that this insight can direct investment in, and refinement of, safety protocols to address these specific areas. However, applying this insight still requires human decision-making and oversight: the human end-users remain responsible for acting on the services rendered in aviation, and that fact will not change.

Thus, the authors firmly believe in the potential of machine learning models to enhance understanding of human factors in aviation safety. But they also admit that the effective use of these models is somewhat idealistic, in that it requires access to comprehensive, high-quality datasets and a solid framework for interpreting the results (such as the HFACS taxonomy). The problem therefore remains that machine learning offers valuable tools for understanding broad patterns and tendencies but may not always capture the depth and unpredictability of individual human actions and decisions. The HFACS taxonomy itself warrants closer examination.

HFACS Taxonomy Limitations

The HFACS taxonomy does offer a structured approach to categorizing human errors, but its application in the paper may be overly reductionist. Human errors, especially in the aviation sector, often result from a confluence of factors, both internal (like fatigue or stress) and external (like equipment malfunction or environmental conditions). Any attempt to box these errors into predefined categories risks oversimplifying the nuance and complexity of human decision-making. Human beings are not robots and do not act according to pre-programmed mechanics; relying on pre-programmed mechanics to anticipate or predict human behavior will therefore always be somewhat short-sighted.

"Black Box" Concern: Transparency Issues in Machine Learning Models

One of the prominent concerns with machine learning and deep learning models is their inherent "black box" characteristic (Rudin, 2019). The term refers to a model's ability to process input data and produce outcomes without revealing the processes or logic behind its decisions. In critical sectors like aviation, where decisions can have life-altering consequences, the hidden nature of the model's analysis should cause one to question its predictive power. If that process cannot be seen, studied, and understood by third-party observers, people are at the mercy of the machine.

For people deeply involved in aviation operations, like pilots or air traffic controllers, such a lack of clarity can breed skepticism, fear, and objections, and can even exacerbate an already problematic situation. If, for example, a machine alerts a human actor to a specific and real danger, but the human actor resents or distrusts the machine and therefore ignores or acts against its recommendations, serious repercussions could result. The pilot or air traffic controller might be reluctant to rely on the machine for information, or might fail to adapt in the face of a danger that the machine recognizes but the person fails to anticipate. Transparency could help reduce the risk of such distrust; at the same time, the lack of it shows that predictions alone are not enough in real-world situations. There is also a need for, and respect owed to, human understanding.
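One commonly cited partial remedy, sketched below only as an illustration and not as the authors' method, is that tree ensembles such as Random Forests expose per-feature importance scores, giving observers a coarse, inspectable view of what drives predictions. The feature names used here are hypothetical.

```python
# Partial transparency sketch: a Random Forest's feature importances
# offer a coarse window into its decisions. Feature names and data
# are invented for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

names = ["pilot_hours", "weather_severity", "aircraft_age",
         "phase_of_flight", "crew_size"]          # hypothetical factors
X, y = make_classification(n_samples=300, n_features=5, random_state=1)

rf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X, y)

# Rank the (hypothetical) factors by how much each drives predictions.
ranked = sorted(zip(names, rf.feature_importances_),
                key=lambda p: p[1], reverse=True)
for name, score in ranked:
    print(f"{name}: {score:.3f}")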

Data Concerns

The paper calls for more extensive datasets to enhance model precision. Theoretically, this is acceptable, but practically speaking there are challenges. The first is ethical. Gathering in-depth data on human actions, especially during sensitive events like aviation accidents, treads on thin ice regarding ethics and privacy. The questions that arise include: What methods would be employed to obtain this data? Would individuals be at ease knowing their behaviors are under such intense scrutiny, especially during critical incidents? How would this scrutiny affect decision-making at the time? The Hawthorne Effect, in which people change their behavior simply because they know they are being observed, could very much be a problem in such a situation.

The second challenge is logistical. The aviation industry is highly dynamic, with constant advancements and changes. Collecting, storing, and analyzing huge data volumes in such a fluid environment could pose significant operational challenges, as would ensuring data integrity and relevance over time given the industry's ever-changing nature. These practical concerns highlight how difficult it can be to realize theoretical benefits in real-world settings.

Addressing Contradictory Evidence

Central to the article is the proposition that machine learning can effectively forecast outcomes in aviation accidents. The authors' selection of the MLP and RF models was rooted in those models' demonstrated performance. The incorporation of AL was also highlighted as a promising tool for refining predictions, especially when grappling with data limitations, and the article's use of the HFACS taxonomy was meant to show that machine learning could help solve safety issues.
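For readers unfamiliar with AL, the sketch below shows one common form, pool-based uncertainty sampling, in which the model repeatedly requests labels for the examples it is least sure about. The dataset, seed size, and query budget are assumptions for illustration; the article's actual AL configuration may differ.

```python
# Sketch of pool-based Active Learning with uncertainty sampling,
# the general kind of scheme credited with helping in data-scarce
# settings. Synthetic data; all sizes are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=400, n_features=8, random_state=2)
labeled = list(range(20))        # small labeled seed set
pool = list(range(20, 400))      # unlabeled pool awaiting queries

for _ in range(5):               # five labeling rounds
    rf = RandomForestClassifier(n_estimators=50, random_state=2)
    rf.fit(X[labeled], y[labeled])
    proba = rf.predict_proba(X[pool])
    # Query the 10 pool points the model is least certain about.
    uncertainty = 1.0 - proba.max(axis=1)
    queries = np.argsort(uncertainty)[-10:]
    for q in sorted(queries, reverse=True):   # pop high indices first
        labeled.append(pool.pop(q))

print(len(labeled), len(pool))   # 70 labeled, 330 still in the pool
```

The appeal in aviation is that investigator effort (the "labeling") is spent only where the model is most confused, which is exactly the regime the article describes as data-scarce.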

However, the authors did little to address the inherent unpredictability of human actions, which suggests that this unpredictability could still stymie any model's attempts at flawless prediction. Nonetheless, the essence of the article's argument is not about attaining absolute precision with machine learning; rather, it centers on augmenting the existing knowledge base and refining the accuracy of predictions. But is this enough? The empirical results showcased in the paper can be viewed as evidence of the potential of machine learning in such circumstances, but only under very limited conditions.

Source: "Reducing Risk of Human Error in Aviation" (2023, September 16). PaperDue. https://www.paperdue.com/essay/reducing-risk-human-error-aviation-article-review-2179869
