Key Points of Security Risk Countermeasures in the Use of Generative AI

With the rapid adoption of generative AI, many companies are exploring how to integrate it into their operations. At the same time, new risks have emerged, such as unintentional information leakage through AI prompts and business disruptions caused by AI-generated misinformation. Because these risks span a wide range of areas, it is essential for organizations to understand both the threats and the appropriate countermeasures to ensure the safe use of generative AI.
This article explains the specific risks faced by users and presents effective mitigation strategies from both technical (system) and governance perspectives.

1. What Are the Risks of Using Generative AI?

Generative AI technology is rapidly evolving, expanding its use cases across many industries. However, as adoption increases, challenges and risks have also grown. To use generative AI safely, companies must first understand what risks exist and how to address them.
Two major risk categories often cited are:

(1) Leakage of Input Information

Sensitive data, including personal information, confidential business data, and trade secrets, may be unintentionally exposed if entered into generative AI.
Such information can be:

  • retained as part of the AI model's training data,
  • accessed by the service provider, or
  • accidentally surfaced in responses to other users.

This may lead to leakage of information that should remain internal.

(2) Inaccuracy of AI Output

Generative AI does not guarantee the accuracy or correctness of its output.
If organizations rely on that output without proper verification, incorrect responses or inaccurate external communications may occur. These can result in:

  • Operational mistakes
  • Spread of misinformation
  • Damage to corporate reputation and trust

These risks apply to all users, individuals and companies alike, making appropriate countermeasures essential.

2. How to Use Generative AI Safely

To select appropriate countermeasures, organizations must first recognize the types of risks and understand the wide variety of available safeguards.
For example, to combat input information leakage:

  • A system can be implemented to check all data before it is sent to generative AI.
  • If personal or confidential information is detected, the system can automatically mask or delete it, as in the sketch after this list.
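
As a minimal sketch of such a pre-send check, assuming simple pattern matching: the Python example below masks a few common forms of sensitive data before a prompt leaves the organization. The patterns and placeholder tags are illustrative assumptions only; a production filter would typically rely on a dedicated PII-detection service rather than hand-written regular expressions.

```python
import re

# Illustrative detection patterns only; a real deployment would use a
# dedicated PII-detection service instead of hand-written regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{2,4}-\d{2,4}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def mask_sensitive(prompt: str) -> str:
    """Replace detected sensitive substrings with a placeholder tag."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} MASKED]", prompt)
    return prompt

# The masked prompt, not the raw one, is what gets sent to the AI service.
print(mask_sensitive("Reach the customer at taro@example.com or 03-1234-5678."))
```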

However, such checks themselves may contain vulnerabilities. Attackers can attempt to bypass detection, leading to a potential leak even when safeguards exist.
Therefore, organizations should also implement:

  • mechanisms to detect vulnerabilities in the filtering process (see the test sketch after this list), and
  • an operational framework that allows security patches to be applied promptly.
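
To make the first point concrete, here is a sketch of what probing the filter for weaknesses could look like: a small regression test that feeds known evasion tricks to the mask_sensitive filter from the earlier sketch and reports anything that passes through untouched. The test cases are illustrative assumptions, not an exhaustive attack catalogue.

```python
# Known evasion tricks an attacker might try against the filter above;
# illustrative examples only.
BYPASS_ATTEMPTS = [
    "taro(at)example(dot)com",   # obfuscated email address
    "taro @ example . com",      # whitespace-padded email address
    "0 3 - 1 2 3 4 - 5 6 7 8",   # spaced-out phone number
]

def audit_filter(mask) -> None:
    """Report inputs that the filter passes through unchanged."""
    for attempt in BYPASS_ATTEMPTS:
        if mask(attempt) == attempt:  # nothing was masked
            print(f"NOT CAUGHT: {attempt!r}")

# Run against the mask_sensitive filter from the previous sketch;
# every line printed is a gap that needs a patch.
audit_filter(mask_sensitive)
```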

By combining:

  • governance measures (rules, guidelines, approval processes), and
  • technical/system measures (filters, access control, monitoring),

organizations can significantly reduce risk. In short, safe usage requires understanding both categories of controls and applying them appropriately.

3. Key Points of Risk Countermeasures

Below, we examine countermeasures for the two risks discussed in Section 1, "What Are the Risks of Using Generative AI?", from both governance and system perspectives.

(1) Measures to Prevent Information Leakage from Generative AI

Information leakage from input data is one of the most serious concerns for organizations.
Leakage of customer information or internal confidential data can result in significant business damage.
Such leakage can arise from many causes, but these generally fall into two broad categories:

  • Planned or intentional leakage (malicious actions)
  • Accidental leakage caused by insufficient security controls or user mistakes

Additionally, leakage sources can be categorized as:

  • Internal (employees, contractors, internal tools), or
  • External (service providers, system integrations, malicious actors)

Understanding these distinctions allows companies to tailor countermeasures and govern their use of generative AI more effectively.

Here are some examples of risks and countermeasures for each combination:

System (technical) measures:

  • Data filtering: Technology that monitors data fed into generative AI in real time and masks or replaces any sensitive information it detects. This reduces the risk of inappropriate information leaking outside the organization through generative AI.
  • Opt-out requests: A mechanism that lets users request that the information contained in their prompts not be used to train the generative AI. This reduces the risk of sensitive prompt content being output to other users of the same service.

Governance measures:

  • Formulation and enforcement of policies and rules: Develop a unified, enterprise-wide policy that clearly defines how generative AI may be used and how data must be handled. Formulating rules and ensuring employees follow them thoroughly reduces the risk of information leakage.
  • Choosing the right generative AI vendor: Select vendors whose services include comprehensive governance and security measures, such as continuous third-party audits. This reduces the risk of unauthorized access, cyberattacks, and similar threats.

Table 1: Information leakage risks and countermeasures from generative AI (examples)

(2) Measures to Address Inaccuracies in Output Information

When using generative AI, users naturally expect high-quality output. Inaccuracies in output information can damage a company's credibility and negatively impact the business. In principle, users of generative AI are responsible for the content it produces, but the following countermeasures can reduce this risk.

System (technical) measures:

  • Selecting generative AI that is updated regularly: Using generative AI that is regularly updated and kept current improves the accuracy and performance of its algorithms, reducing the risk of inaccurate output.
  • Selecting generative AI with the right capabilities: Choose generative AI equipped with output quality-assurance technologies such as cross-checking and confidence scoring. This reduces the risk of producing inaccurate information or accepting it uncritically.

Governance measures:

  • Guidelines: Establish guidelines for the use of generative AI that specify concrete usage methods and precautions. These guidelines help employees understand generative AI's capabilities and limitations and use it appropriately.
  • Employee education: Provide regular training on the correct use of generative AI, including examples of problems that have arisen in practice. This raises each employee's information literacy and reduces the risk of taking generative AI answers at face value and sending them outside the company unverified.

Table 2: Countermeasures against inaccuracies in output information (examples)
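
As one illustration of the cross-checking idea in Table 2, the sketch below samples a model several times and treats agreement among the answers as a rough confidence score, flagging low-agreement answers for human review. The generate function is a hypothetical stand-in for whatever API is actually used, and the 0.8 threshold is an arbitrary assumption.

```python
import random
from collections import Counter

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to a generative AI API."""
    # Simulated variation, for demonstration purposes only.
    return random.choice(["Answer A", "Answer A", "Answer B"])

def answer_with_confidence(prompt: str, samples: int = 5):
    """Sample the model several times; agreement among answers serves as
    a rough confidence score, and low agreement flags the answer for
    human review before it is used."""
    answers = Counter(generate(prompt) for _ in range(samples))
    best, count = answers.most_common(1)[0]
    confidence = count / samples
    needs_review = confidence < 0.8  # arbitrary threshold
    return best, confidence, needs_review

print(answer_with_confidence("Summarize our refund policy."))
```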

4. Summary

Generative AI has become so pervasive that it is now part of daily business conversations, and its use will continue to expand in the future. This trend is effectively irreversible. At the same time, various challenges and risks have emerged as generative AI becomes more deeply integrated into corporate operations.
Given this context, organizations must acknowledge these risks and implement appropriate countermeasures when using generative AI.
In this article, we explained countermeasures for two major risks associated with generative AI. However, the measures introduced here are not exhaustive. Organizations must continuously evaluate whether there are risks not covered by these measures or areas where controls may be excessive. To achieve this, companies should:

  • Identify and categorize the risks surrounding generative AI
  • Assess each risk based on its potential impact and likelihood
  • Prioritize and implement countermeasures according to risk level (a scoring sketch follows this list)
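
A minimal sketch of such risk-level scoring, assuming a simple 1-to-3 rating for impact and likelihood; the example risks, ratings, and thresholds are illustrative assumptions only.

```python
# Each identified risk is rated for impact and likelihood (1 = low,
# 3 = high); the product of the two decides the priority band.
RISKS = {
    "Prompt data leaked via model training":  {"impact": 3, "likelihood": 2},
    "Inaccurate output published externally": {"impact": 3, "likelihood": 3},
    "Filter bypassed by obfuscated input":    {"impact": 2, "likelihood": 2},
}

def risk_level(impact: int, likelihood: int) -> str:
    """Map an impact x likelihood score onto a coarse priority band."""
    score = impact * likelihood
    if score >= 6:
        return "High (mitigate immediately)"
    if score >= 3:
        return "Medium (plan countermeasures)"
    return "Low (monitor)"

# Highest-priority risks are listed first.
for name, r in sorted(RISKS.items(),
                      key=lambda kv: -kv[1]["impact"] * kv[1]["likelihood"]):
    print(f'{risk_level(r["impact"], r["likelihood"])}: {name}')
```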

Generative AI is still a relatively new technology, and its evolution will continue to accelerate. It is expected to integrate with an increasingly wide range of tools and information systems, further expanding its influence and risk surface. Because both the risks and the corresponding mitigation strategies evolve daily, organizations must remain vigilant, regularly monitor AI security trends, and continuously evaluate and improve their risk mitigation approaches, even after implementing initial controls.
NTT DATA provides comprehensive support for generative AI security, including research and development, consulting, solution design, and implementation assistance. If you have any concerns or would like to strengthen your organization's approach to AI security, please feel free to contact us.

Toshiki Sakai

Assistant Manager, NTT DATA, Technology Consulting Sector, Technology Consulting Division

Joined in 2023. As a security consultant, supports security evaluations of the diverse tools and services that customers are considering adopting.