Potential Use of Generative AI in Risk Analysis Operations

Risk analysis and countermeasure design are the main tasks in the upstream process of cybersecurity; they demand highly skilled personnel and impose a heavy workload. In this article, we present the results of our verification of how feasible generative AI is for this upstream process, and discuss the possibility of its use in the future.

Characteristics of Generative AI

The introduction of generative AI is being considered in a variety of settings to automate work and improve operational efficiency. As a first step, it is necessary to sort out how well generative AI fits the target operation and which parts of it can be entrusted to the AI.

Some examples of what generative AI is good at:

  • Shortening output creation time: the generative AI produces a rough draft, and the person who asked finalizes it based on that draft.
  • Expanding ideas based on the knowledge the generative AI has accumulated.

However, generative AI also has disadvantages and problems:

  • Work that spans multiple processes cannot be assigned to it.
  • It speaks confidently about mistakes and inappropriate content (hallucination).

Given these characteristics, the following prerequisites emerge for using generative AI:

  • Do not require the output of generative AI to be complete or exact.
  • The user must be able to judge the quality of the generative AI's output.

Based on this information, you need to consider whether generative AI is useful for your work. What do you think, considering your own operations?

Figure 1: Work that generative AI is good at and not good at

Current State of Risk Analysis and Countermeasure Design

Upstream tasks related to cybersecurity, such as risk analysis and countermeasure design, are becoming more important as attacks grow more sophisticated and damage increases. However, the number of security experts who can perform such tasks is limited, and even these experts need considerable time to perform them.

The main factors are:

  • In addition to technical matters, complex factors such as legal systems, business requirements, budgets, and schedules are intertwined, making it impossible to mechanically decide between pass and fail.
  • Each of these factors must be considered comprehensively and accurately.

Currently, experienced experts conduct final checks of the outputs to ensure quality.

Hurdles in Introducing Generative AI to Risk Analysis and Countermeasure Design

In light of this situation, risk analysis and countermeasure design involve many tasks that generative AI is not good at, and unfortunately the hurdle to introducing generative AI remains high at this point.

In fact, in our verification using OpenAI's ChatGPT-3.5, we compared the case where experts performed the risk analysis and countermeasure consideration themselves with the case where ChatGPT performed them and experts then confirmed the output. Even in the latter case, we could not reduce the experts' working time. [*]

In the verification, we decomposed the process of risk analysis and countermeasure design into individual tasks and measured how much the man-hours needed to complete each task could be reduced. For example, when we repeatedly gave ChatGPT-3.5 the same prompt for the task of selecting, from multiple countermeasure proposals, the one that best matched the situation, it sometimes presented a different countermeasure on each run. Similarly, when we assigned ChatGPT-3.5 the role of a reviewer and had it judge the validity of a worker's decision, its judgments did depend in part on how the prompt was built, but it sometimes judged two mutually incompatible decisions as both valid.
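
To make the repeated-prompt experiment concrete, the sketch below sends one fixed countermeasure-selection prompt to the model several times and tallies the answers; an inconsistent model shows a spread in the tally. This is a minimal sketch assuming the OpenAI Python SDK, and the candidate countermeasures and prompt wording are illustrative placeholders, not the prompts used in our verification.

    # Minimal sketch of the repeated-prompt check (assumes the OpenAI Python
    # SDK; candidates and prompt wording are illustrative placeholders).
    from collections import Counter

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    CANDIDATES = [
        "A: Enforce multi-factor authentication on all admin accounts",
        "B: Segment the network to restrict lateral movement",
        "C: Deploy endpoint detection and response on all endpoints",
    ]

    PROMPT = (
        "You are a security consultant. For a system handling customer "
        "payment data, select the single best countermeasure from the "
        "list below and answer with its letter only.\n" + "\n".join(CANDIDATES)
    )

    def ask_once() -> str:
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": PROMPT}],
        )
        return resp.choices[0].message.content.strip()

    # Repeat the identical prompt and tally the answers; a consistent model
    # would pick the same option every time.
    print(Counter(ask_once() for _ in range(10)))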

From these results, we found that even when we entrusted these tasks to ChatGPT-3.5, experts still had to verify the validity of the output, so the experts' working time did not decrease.

Outlook for Introducing Generative AI to Risk Analysis and Countermeasure Design

So, will it be possible to leave risk analysis and countermeasure consideration to generative AI in the future? As mentioned above, the time experts spend checking the output created by generative AI is currently the bottleneck, so the first step is to reduce that checking load. Measures to reduce it include:

  • Raising the output quality of generative AI
  • Automating output checking

Both measures essentially require formulating the checking procedures and judgment criteria and having generative AI learn them.
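
One way to approach this is to embed the checking procedure itself in a reviewer-role prompt. The following is a minimal sketch assuming the OpenAI Python SDK; the procedure wording is a placeholder, not the actual criteria from our verification.

    # Minimal sketch: a fixed checking procedure embedded in a reviewer-role
    # system prompt (assumes the OpenAI Python SDK; the procedure text is an
    # illustrative placeholder, not our actual review criteria).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    CHECK_PROCEDURE = """\
    You are a security reviewer. Check the risk analysis you are given:
    1. Is each required security viewpoint examined?
    2. Are operational and organizational aspects covered, not only
       technical ones?
    For each item, answer 'valid' or 'invalid' with a one-line reason."""

    def review(analysis: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": CHECK_PROCEDURE},
                {"role": "user", "content": analysis},
            ],
        )
        return resp.choices[0].message.content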

The checking procedures can be specified to some extent: for example, whether the analysis is examined from the viewpoints of confidentiality, integrity, and availability, the three elements of security, and whether it is examined not only from the technical side but also from the operational, organizational, and management sides. The judgment criteria, however, have no firm definition; decisions are made by appropriately weighing various viewpoints, such as the size of the business or the company itself and the schedule, so they are difficult to formulate. Therefore, unless there is a breakthrough such as the practical application of Artificial General Intelligence (AGI), which is currently being researched, such work is likely to remain manual for the time being.
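
By contrast, the part of the check that can be formalized, such as whether a draft touches each fixed viewpoint at all, can be automated mechanically. The sketch below is a rough illustration of that idea; the viewpoint names and keywords are assumptions for demonstration, and whether each viewpoint is treated appropriately for the business size, budget, and schedule is exactly the judgment that still needs an expert.

    # Rough sketch of automating the formalizable part of output checking:
    # does a draft analysis touch each fixed viewpoint at all? The viewpoint
    # names and keywords below are assumptions for demonstration only.
    VIEWPOINTS = {
        "confidentiality": ["encrypt", "access control", "confidential"],
        "integrity": ["tamper", "integrity", "signature", "hash"],
        "availability": ["redundan", "backup", "failover", "availability"],
        "operational": ["procedure", "training", "monitoring", "operation"],
    }

    def coverage_report(analysis_text: str) -> dict[str, bool]:
        """Flag which viewpoints the draft appears to cover.

        This catches only "was viewpoint X examined at all?"; whether the
        examination suits the business and schedule still needs an expert.
        """
        text = analysis_text.lower()
        return {
            viewpoint: any(keyword in text for keyword in keywords)
            for viewpoint, keywords in VIEWPOINTS.items()
        }

    draft = "All data at rest is encrypted; nightly backups provide failover."
    print(coverage_report(draft))
    # {'confidentiality': True, 'integrity': False,
    #  'availability': True, 'operational': False}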

I noted above that generative AI is not well suited to risk analysis and countermeasure design, but that does not mean it is completely incapable of them. In fact, when we had ChatGPT-3.5 perform a risk analysis, there were parts where it analyzed the system properly, taking its characteristics into account. For this reason, there may be cases in the future where, for outputs whose risk of falsehoods or errors is low, we perform only a minimal check and accept the output while remaining fully aware of the remaining risks.

Nobuo Idezawa

NTT DATA Japan Corporation

After working on security-related technology development, data center operations, and security system construction, he is now engaged in security consulting.
Recently, he has served as a security consultant for the financial sector, examining rules related to security governance.