How Reliable Is AI? Breaking Down the Black Box Model Problem


Superb AI

2023/6/27 | 3 Min

Artificial intelligence is connecting the lives of billions of people around the world in a way never seen before. Consider, for example, that the YouTube video recommended to you today very likely does not reflect your taste alone.

YouTube's recommendation system works by closely analyzing not only your personal information and search history, but also which videos people of a similar age, gender, and occupation around the world are likely to be interested in, and then recommending options based on that context. Seen this way, each of us acts as a data supplier to a larger system, the artificial intelligence model, and we end up sharing one another's tastes without knowing it.


Can You Rely on AI? 

When artificial intelligence is adopted in areas that can significantly affect many lives, such as healthcare, law, and politics, it can transform those fields completely. For example, suppose an artificial intelligence model predicts who will commit a crime based on conditions such as race, gender, place of residence, and past criminal history.

As users, we do not know what data or reasoning the AI model used to flag potential criminals. This runs counter to the legal principle of the presumption of innocence, which exists to ensure that not a single innocent person is wrongly punished, and it poses a direct challenge to the human right to know.


A Slow Introduction to AI

As described above, the adoption of artificial intelligence is still limited in fields such as law and politics, where social consensus on ethical and moral issues matters, compared to other areas.


However, if the role of the state and government becomes ambiguous, and questions begin to arise about whether humans can be replaced in the name of rationality and convenience, the spread of artificial intelligence into the public and political sectors may be only a matter of time.


AI and Democracy

Yuval Noah Harari describes an AI-driven society in which everything is tightly controlled in his book Homo Deus. By his argument, artificial intelligence may not be well suited to a democracy that values individual human rights and the right to know.

Rather, the AI-led society he imagines resembles a dictatorship built on thorough control and surveillance. Whether the artificial intelligence humans develop becomes a weapon that suppresses human life ultimately comes down to whether human beings can control it.


Controlling AI: An Urgent Need 

Therefore, before it is too late, we must recognize the potential threat posed by artificial intelligence and begin efforts to bring it under control. Above all, for those of us living in democratic systems in which the human rights and suffrage of every citizen are fundamental, thinking about and discussing how to coexist with artificial intelligence may be more urgent and important than we expect.



The Development Process of Artificial Intelligence

Although many people overlook it, artificial intelligence has not developed with a clear purpose and direction under the supervision of a reliable institution. 


The Origins of AI

Since Professor John McCarthy first introduced the concept of artificial intelligence, machines that can reason and solve problems like humans, at the Dartmouth Conference in 1956, the field has developed on the strength of statistics and computer science.


The field then ran into technical limits in the 1970s and entered an "AI winter," before reaching a second heyday with the emergence of deep learning, whose hidden layers overcame the XOR problem that single-layer perceptrons could not solve.


Recent Advances in AI

More recently, advances in cloud computing and GPUs have made it possible to store and analyze enormous amounts of data, and AI is now applied in many forms, spanning images, text, and speech as well as generative AI.

Along the way, a variety of techniques have emerged: supervised learning, which learns from labeled training data; unsupervised learning, which uncovers characteristics and patterns in large volumes of data that humans have not found; and reinforcement learning, which uses rewards to induce a model to discover patterns on its own.
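To make the distinction more concrete, here is a minimal sketch in Python contrasting supervised and unsupervised learning on the same synthetic data. It uses scikit-learn and NumPy, and the data and settings are illustrative assumptions rather than a description of any particular production system.

```python
# A minimal, illustrative contrast between supervised and unsupervised learning.
# Assumes scikit-learn and NumPy are installed; the data is synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two synthetic clusters of 2-D points; y records which cluster each point came from.
X = np.vstack([rng.normal(0.0, 1.0, size=(100, 2)),
               rng.normal(4.0, 1.0, size=(100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Supervised learning: the model is shown the correct label for every example.
clf = LogisticRegression().fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised learning: the model sees only X and must find structure by itself.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))
```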


The Emergence of the Black Box Problem

The problem is that the development of these technologies has reflected only the human desire to build and use a "machine that learns and judges on its own," without sufficient consideration of how such systems operate or what influence they will have. This pattern of development has produced a problem known as the 'black box model.'



A diagram depicting a conceptual black box system/model with input and output functionality.




Defining Explainable AI

Artificial intelligence is capable of learning from a vast amount of data, far surpassing human capabilities, and independently identifying patterns. It can even analyze and predict protein molecular structures, a challenge that has occupied the biological community for decades.


The Black Box Model

Despite these impressive capabilities, AI's internal workings often remain inscrutable to us. The computations that flow through hundreds of millions of parameters in an artificial neural network (ANN) form a process too intricate for humans to follow.


This type of AI, where the internal process is not transparent, is known as the 'black box' model. With data volumes increasing exponentially and computational power advancing significantly, most of the AI models we encounter in our daily lives correspond to this 'black box' model.


The Concept of Explainable AI

In contrast to the black box model, there exists a type of AI known as Explainable AI. In this model, one can infer how the input variables affect the analysis process and results.
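As a rough illustration (one simple route to explainability, not the only one), a linear model makes this property concrete: its learned coefficients can be read off directly to see how each input variable pushes the prediction. The feature names and data in the sketch below are hypothetical, and it assumes scikit-learn is available.

```python
# A minimal sketch of an 'explainable' model: a linear classifier whose
# learned coefficients directly show how each input affects the prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical input variables; the data below is synthetic, for illustration only.
feature_names = ["age", "income", "prior_visits"]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
# The synthetic outcome is driven mainly by the first feature.
y = (X[:, 0] + 0.2 * rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Each learned coefficient is directly readable: its sign and size show how
# that input variable pushes the prediction up or down.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>12}: {coef:+.2f}")
```

For genuinely black box models, post-hoc explanation techniques such as LIME or SHAP are commonly used to approximate this kind of per-feature insight.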


The Importance of Explainable AI

As AI gets introduced and utilized in sensitive and important areas such as politics, law, and healthcare, it is crucial to strive for the use of 'explainable' AI models. These models provide transparency and understanding, which are essential when making critical decisions in these sectors.


Data Bias

Most artificial intelligence model development goes through training, validation, and testing. There are unsupervised learning models in which the AI finds patterns and similarities in vast amounts of data on its own, without human intervention, but supervised learning and reinforcement learning still require human involvement. This means there is still a risk that human bias will be reflected in the model through the labeling and selection of training data.

For example, in multiracial and multicultural countries such as the United States, there is inevitably more data on the mainstream majority of society than on minorities. This imbalance can carry over into what the model learns, producing results that favor the majority. In other words, we risk creating yet another prejudiced or biased AI application.
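One simple, illustrative way to surface such imbalance before training is to audit how groups and outcomes are distributed in the labeled data. The sketch below assumes pandas is installed; the column names and counts are hypothetical.

```python
# A minimal sketch of one basic bias check: comparing how groups are
# represented, and how outcomes differ, in a labeled training set.
import pandas as pd

# Hypothetical demographic attribute and label columns with made-up counts.
labels = pd.DataFrame({
    "group": ["majority"] * 900 + ["minority"] * 100,
    "label": (["approved"] * 700 + ["denied"] * 200
              + ["approved"] * 30 + ["denied"] * 70),
})

# Representation: what share of the dataset each group accounts for.
print(labels["group"].value_counts(normalize=True))

# Outcome rates per group: large gaps are a signal to check whether the
# labels or the sampling process encode human bias.
print(labels.groupby("group")["label"].value_counts(normalize=True))
```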



Striking a Balance 

The reliability of AI and its adoption into our society pose significant challenges, especially given the opaque nature of black box models. The widespread use of AI technologies raises issues of data bias, lack of transparency, and potential infringement on human rights. Concerns are especially acute in fields such as law, politics, and healthcare, where AI could do more harm than good.


These complications have underscored the need for "explainable AI," wherein the decision-making processes are transparent and traceable. Furthermore, the introduction of AI technologies must be done with careful thought towards eliminating human bias from AI training processes to prevent reinforcement of societal biases. 

As artificial intelligence continues to permeate every aspect of our lives, it becomes ever more critical to consider the ethical implications and strive for a balance between the benefits AI offers and the preservation of democratic principles and human rights. 
