Friday, February 23, 2024

AI’s “black box problem”


A "black box" is a device or mechanism whose internal structure cannot be seen. As the name suggests, it is likened to a box whose contents you cannot know.


AI (artificial intelligence) now uses deep learning to make judgments that go beyond rules humans set by hand. Advances in this technology promise further development and broader applicability, but at the same time they raise the question, "Can AI be trusted?" This article looks at one of the problems AI faces, the "black box problem": cases where it matters, the points to note regarding AI utilization announced by Japan's Ministry of Internal Affairs and Communications, and the reliability that will be required of AI in the future.

Meaning and strengths of the black box

The internal structure cannot be seen

A black box, again, is a device or mechanism whose internal structure is hidden from view. Even without knowing the internal structure, you can obtain the correct output as long as you know how to operate the device. However, because the internal structure is unknown, you cannot know what logic was used to derive that output.

Conversely, devices and mechanisms whose internal structure is visible are called white boxes. With a white box, attention is on the internals: whether the internal logic is correct and whether the program is written correctly. The ability to see and understand the internal state is likened to a white (transparent) box.

Well suited to testing external software specifications

One strength of the black-box approach is testing software against its external specifications. In a black-box test, only the inputs and outputs matter. For example, if a device is specified to return output B for input A, the test succeeds if you can confirm that input A actually produces output B.

It does not matter what internal process the device went through to produce B. Because the internal structure is ignored, you can focus purely on the observable behavior (whether it works according to the external specifications). Beyond specification conformance, you can also perform usability-oriented tests, such as whether the product meets the user's requirements and whether there are problems with ease of use.
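As a minimal sketch of this idea (the function `classify` and its expected outputs are invented for illustration), a black-box test only checks that each input yields the specified output, without ever inspecting the implementation:

```python
def classify(value):
    """The system under test; its internals are irrelevant to a black-box test."""
    return "B" if value == "A" else "unknown"

def black_box_test():
    # Only the external specification is checked: input A must yield output B.
    assert classify("A") == "B"
    # Inputs outside the happy path are also exercised purely from the outside.
    assert classify("X") == "unknown"
    return "all black-box checks passed"

print(black_box_test())
```

The test would pass unchanged even if `classify` were rewritten internally, which is exactly the point: only the input/output contract is verified.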

As long as the system behaves according to its external specifications, there is no problem. But if a serious defect caused by an internal logic error surfaces, analysis can take a long time, because the internal structure must first be understood.

What is the “black box problem” of concern?

"We cannot understand the reasoning by which the AI arrived at its answer." This is the black box problem that AI utilization faces. In conventional AI, humans set the rules and judgment criteria for the system's reasoning.

However, as research into AI, neural networks, and deep learning has accelerated, AI has in recent years become an "intelligence" that creates its own judgment criteria.

A neural network is a learning method that artificially models neurons, the cells of the human nervous system. Deep learning stacks such neural networks into many layers, making it possible to automatically learn from large amounts of data such as images, text, and audio and to recognize them with high accuracy.

The criterion an AI uses to judge things is "weighting". The AI treats the strength of the connections in the neural network as the "weight" of the data. In a situation such as "Should I choose A or B?", the AI reaches a conclusion by comparing the weighted evidence for A against the weighted evidence for B. However, because deep learning adjusts an enormous number of weights through an enormous amount of processing, it is very difficult for humans to extract a clear, human-readable "weighting" standard.
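A toy sketch of this idea (the feature values and weights below are invented for illustration): each option's evidence is reduced to a weighted sum, and the option with the larger sum wins, yet no individual weight carries a meaning a human could read off.

```python
# Hypothetical learned weights: each number reflects a connection strength
# produced by training, not a rule a human wrote down.
weights = [0.82, -0.41, 1.37]

def score(features):
    # Weighted sum: how the network "weighs" the evidence for one option.
    return sum(w * f for w, f in zip(weights, features))

features_a = [1.0, 0.2, 0.5]   # evidence observed for option A
features_b = [0.3, 0.9, 0.4]   # evidence observed for option B

choice = "A" if score(features_a) > score(features_b) else "B"
print(choice)  # -> A
```

A real deep network has millions of such weights across many layers, which is why tracing a conclusion back to human-understandable reasons is practically impossible.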

In addition, AI processes information at a speed incomparable to human thought, so the amount of information behind its judgments is enormous. It is therefore practically impossible for humans to trace it all.

In this way, AI bases its judgments on "weighting" over a huge amount of information, but humans cannot follow this, so the AI's judgment criteria become a black box. Simply put, AI's thinking process is incomprehensible to humans. This is AI's "black box problem".

Cases where it matters

Depending on the industry, it may not be necessary to know the AI's judgment criteria. For example, services that perform image analysis or image classification care mainly about the results, so even if the AI's reasoning is a black box, it matters relatively little.


However, when AI is involved in matters of life and death, such as medical diagnosis or autonomous driving, or is used for decisions that shape a person's life or single out individuals, such as candidate selection, the question "why did the AI reach that conclusion?" and the criteria and thinking process behind it become critically important.

If a conclusion made by AI is physically or ethically wrong, and the AI's thinking process is not understood, the mistake may affect people's lives and livelihoods, and it becomes difficult to improve the program logic that produced the wrong decision.

Medical diagnosis by AI

The medical field is one where we should be especially aware of the black box problem when utilizing AI.

As an example, consider a case where AI selects candidates for an organ transplant. The AI will select the "people suitable for this transplant" based on what it has learned through deep learning.

In such a case, a candidate who was not selected will naturally ask, "Why was I not selected?" But because the grounds for the AI's selection lie inside the so-called black box and humans cannot understand them, a human doctor can only tell the patient, "That was the result of the AI's choice."

Certainly, AI is thought to make accurate judgments based on data, free of personal feelings, but in decisions that affect a person's life or death, "we do not know the reason" is not an acceptable answer.

This inability to explain the basis of a diagnosis in the medical field is one concern arising from AI's black box problem.

Autonomous driving by AI

Currently, AI judgment is beginning to be incorporated into the autonomous driving of automobiles. For the car to move safely, the AI must make judgments and take actions that avoid accidents. However, the black box problem is a concern here as well.

If AI-driven autonomous driving caused no traffic accidents at all, there would be no problem. But in the unlikely event of an accident, it becomes necessary to clarify what actions and judgments the AI took when the accident occurred: to determine the cause, to decide whether the AI used for autonomous driving needs improvement, and to assign responsibility for the accident.

However, because of the black box problem, the process that led to the accident cannot be known. It therefore becomes difficult to improve the judgment of the AI used for autonomous driving, and the risks associated with autonomous driving cannot be reduced.

Discrimination in recruiting by AI

AI obtains a huge amount of information through deep learning to make its decisions, and humans then act on those decisions. Such action rests on the premise that "because the AI judges from a huge amount of data, it selects the optimal answer from an objective viewpoint."

However, there are cases where biased judgments are made, depending on the data used for machine learning. As in the medical example, the black-box nature of AI is particularly problematic when biased judgments occur in fields where AI is used to judge people.

One example of the AI black box problem in such "human judgment" is recruitment work that utilizes AI. As mentioned above, AI decisions rest on a huge amount of machine-learned data, so they can be affected by biases in the training data.

There have been cases where a company that historically tended to hire men trained an AI on its past hiring data, and the resulting hiring criteria were biased, making it difficult for women to be hired. Because of the black box problem, it is hard to explain or improve the AI's judgment criteria and process, and biased results may go unnoticed.

Thus, when AI is used to select and judge people, as in hiring, there is a concern not only about the black box problem itself but also about creating discrimination without noticing it.

The reliability required of AI

AI is expected to be deployed in various services, but there are concerns about the problems that accompany its use, and the black box problem above is one of them. Against this background, there are movements at national and global scale to consider and establish rules and principles for the utilization of AI.

Japan has seen similar movements: in 2018, the Ministry of Internal Affairs and Communications announced the draft "AI Utilization Principles". This draft lists the following ten items as matters to be noted in the utilization of AI.

  1. Principle of proper utilization
  2. Principle of proper learning
  3. Principle of cooperation
  4. Principle of safety
  5. Principle of security
  6. Principle of privacy
  7. Principle of dignity and autonomy
  8. Principle of fairness
  9. Principle of transparency
  10. Principle of accountability

Source: Ministry of Internal Affairs and Communications Information and Communication Policy Research Institute “AI Utilization Principles (July 31, 2018)”

Of the above, the principles of fairness (8), transparency (9), and accountability (10) are said to relate mainly to building trust in AI. This kind of trust is assumed to be required because AI can significantly affect users depending on how it is used, as in the medical field, automated driving, and personnel decisions.

To solve the black box problem

To address the black box problem, a mechanism is needed to compensate for the black box's shortcomings. As mentioned above, a black box ignores the internal structure, but there is also the white box, an approach that focuses on the internal structure.

The white box: testing the internal structure

White-box testing focuses on the internal structure of a device or mechanism. It tests whether the logic and control flow are correct for each module or unit of program code. Because of this, it is used in unit tests rather than system tests, and knowledge of programming is indispensable. It is therefore common for the programmer who implemented the code to perform it.

Specifically, there is the "control flow test", which feeds the program conditions that exercise its branches and confirms that the correct processing decisions are made and the intended paths are taken, and the "data flow test", which checks whether data processing and variable transformations are executed as the program instructs.
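As a minimal sketch (the function `grade` and its thresholds are invented for illustration), a white-box control-flow test chooses its inputs by reading the code, so that every internal branch is executed at least once:

```python
def grade(score):
    # Three internal branches; white-box testing aims to exercise all of them.
    if score >= 80:
        return "pass"
    elif score >= 50:
        return "retry"
    else:
        return "fail"

def control_flow_test():
    # One input per branch, chosen from the code itself, not the external spec.
    assert grade(90) == "pass"   # covers the first branch
    assert grade(60) == "retry"  # covers the elif branch
    assert grade(10) == "fail"   # covers the else branch
    return "all branches covered"

print(control_flow_test())
```

Unlike a black-box test, these inputs are picked precisely because the tester can see where the branch boundaries lie.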

Weaknesses compared to the black box

The weakness of white-box testing compared to black-box testing is that verification from the outside is thin. White-box tests are usually performed function by function, often independently by the same programmer who implemented the code. In that case, the behavior of the system as a whole cannot be fully covered.

Therefore, the overall behavior of the system is usually covered by black-box tests, which focus on inputs and outputs. White-box testing is programmer-oriented, since it confirms that the program works as designed.

Black-box testing is user-oriented, since it confirms the appearance of the UI, the validity of the output, and operability (usability) such as ease of use. Because the two test viewpoints differ, both kinds of test are needed to satisfy the test requirements.

The gray box in between

A gray box combines characteristics of both the white box and the black box: it focuses on the internal structure while also testing the validity of the output against the external specifications.

One problem with system testing is the huge number of man-hours it takes. The man-hours balloon because, without knowledge of the internal structure, we cannot tell where defects may be lurking and so must test every function.

In such full-function testing, even functions known at the design stage to be defect-free are tested, producing, so to speak, "wasted tests". Moreover, without knowledge of the internal structure, serious defects are discovered late, and correcting them requires many man-hours.

Because a gray-box test works from the external specifications while also grasping the internal behavior, it can be more accurate and efficient than black-box testing alone.
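As a sketch (the cache-backed lookup and its internals are hypothetical), a gray-box test verifies the externally specified result while also using knowledge of the internal cache to confirm it was populated:

```python
# A hypothetical component with internal state: a lookup backed by a cache.
cache = {}

def lookup(key):
    # Internal detail: results are memoized in `cache` after the first call.
    if key not in cache:
        cache[key] = key.upper()  # stand-in for an expensive computation
    return cache[key]

def gray_box_test():
    # Black-box aspect: the external specification (uppercase result) holds.
    assert lookup("abc") == "ABC"
    # White-box aspect: knowledge of the internals lets us inspect the cache,
    # so a memoization defect is caught here rather than surfacing much later.
    assert "abc" in cache
    return "gray-box checks passed"

print(gray_box_test())
```

A pure black-box test could only call `lookup` repeatedly and hope a caching bug showed up in timing or behavior; the gray-box version checks the internal state directly.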

Disadvantages of gray box testing

The gray box has the advantages of both the white box and the black box, but it also has disadvantages.

First, code coverage is lower than in white-box testing, because the test is inevitably less exhaustive. Both the black-box and white-box approaches are deliberately simplified, each facing only one side: the internal structure or the external specification. Testing both at once is demanding, so gray-box testing cannot completely replace the other two.

It also demands a high level of technical knowledge and skill to cover the white-box side, so it is a test that is selective about its testers. One countermeasure is for the program's implementer to brief the tester on the internal structure, which makes targeted testing easier. Alternatively, the programmer may perform the gray-box test alone, but in that case verification against the external specification tends to be thin.

Because gray-box testing examines the target both internally and externally, it also requires more test man-hours than either box test alone.

In conclusion

If you want to test the internal structure, it is appropriate for a programmer familiar with it to perform white-box tests, but that alone does not cover tests that affect the system as a whole.

Black-box testing, on the other hand, confirms outputs for external inputs, so it can cover the external specifications that white-box testing cannot, but it cannot test the internal structure. White-box and black-box testing are not a matter of doing one or the other.

It is important to incorporate both into testing, according to the application, so that the strengths of each are used. It is also important to decide in advance how much testing to do, considering man-hours and cost-effectiveness. Where necessary, consider approaches that test the inside and outside together, such as gray-box testing.

Completely solving AI's black box problem will require further technological development, but if gray-box-style approaches can be developed to the point where the factors underlying deep learning can be linked to the output results, that will be a clue to the solution.

If you want to read about similar topics, check out the Facebook page Maga Techs.


