Information leak? Pitfalls of using generative AI in companies
In recent years, many companies have adopted generative AI to enhance operational efficiency and productivity. Its effectiveness has been recognized in areas such as document creation, information organization, and customer support, and its applications continue to expand rapidly. However, as the use of generative AI grows, new security risks have emerged - such as the accidental input of confidential information and unintentional data leakage. Given these challenges, it is essential for organizations to understand these risks and establish clear countermeasures to ensure the safe and appropriate use of generative AI.
This article outlines the key security risks companies face when using generative AI and presents fundamental strategies to mitigate them.
1. Challenges of Using Generative AI Internally
1.1. The Convenience of Generative AI and Information Security Risks
Generative AI is being widely adopted by companies and is expected to significantly improve productivity. However, its growing use has also introduced new information security risks (*1). In the past, for example, a major electronics manufacturer experienced an internal information leak when employees inadvertently entered confidential data into ChatGPT (*2).
Such incidents highlight how the convenience of generative AI can lead to serious security breaches that affect both corporate credibility and business performance.
As a result, some companies have chosen to completely restrict the use of generative AI. According to a global survey on enterprise AI usage conducted by NTT DATA (*1):
- 95% of executives stated that generative AI will have a major impact on improving productivity.
- 89% expressed concern about the security risks associated with adopting generative AI.
This underscores the responsibility of corporate CISOs and security professionals to accurately evaluate the balance between generative AI's benefits and risks, and to implement appropriate measures tailored to their organization's circumstances.
1.2. Purpose of This Article
This article focuses on the information security risks that arise from the use of generative AI and summarizes the measures companies should consider. We first present the typical risks associated with generative AI. We then explain both human centric and technical countermeasures that organizations can implement. Finally, we discuss how to strike a balance between convenience and security and introduce the role NTT DATA plays as a trusted partner in helping companies safely adopt generative AI.
-
(*1)
NTT DATA, Global GenAI Report
https://services.global.ntt/en-us/campaigns/global-genai-report -
(*2)
Forbes, Samsung Bans ChatGPT Among Employees After Sensitive Code Leak
https://www.forbes.com/sites/siladityaray/2023/05/02/samsung-bans-chatgpt-and-other-chatbots-for-employees-after-sensitive-code-leak/
2. Generative AI Security Risks and What Companies Should Do
2.1. Information Security Risks Associated with Generative AI
Information security is commonly defined by three core principles - confidentiality, integrity, and availability - as established by ISO/IEC (International Organization for Standardization / International Electrotechnical Commission). In addition, ISO/IEC includes four supplementary elements that broaden the definition of security to a total of seven: non-repudiation, accountability, authenticity, and reliability (*3).
In this section, we focus on the factors that have particularly significant practical impact on how enterprises use generative AI. We categorize the major risks associated with generative AI and provide representative examples and diagrams that illustrate how these risks manifest in real-world scenarios.
Figure: Typical security risks faced when using generative AI in companies
2.1.1. Confidentiality: Leakage of Sensitive Information
There is a risk that sensitive internal information may be leaked to external service providers - or even to users inside or outside the organization - when employees input confidential data into generative AI.
In addition to direct user input, sensitive information can also leak unintentionally through system-to-system integrations, such as connectors (e.g., MCP), if proper controls are not in place.
2.1.2. Integrity: Tampering or Loss of Information
Generative AI systems may be exposed to risks such as prompt injection or data poisoning, where an attacker manipulates training data or injects malicious instructions.
These attacks can result in unintended modifications, corruption, or deletion of information within systems that rely on generative AI outputs.
2.1.3. Availability: Business Impact from Service Failures
As generative AI becomes more deeply integrated into business processes, companies may face operational delays or outages due to:
- External service downtime
- API specification changes
- Attacks designed to disable AI services
- Excessive access loads or resource exhaustion
Increasing dependency on AI services means such incidents can have a significant operational and financial impact.
2.1.4. Non-Repudiation: Opacity of Operation History
Some generative AI services do not keep sufficient records of prompts, interactions, or generated outputs.
This lack of logging makes it difficult to determine "who did what and when" in the event of an incident, complicating investigation and accountability.
2.1.5. Accountability: Governance Obscurity
If generative AI usage is handled independently across departments without centralized control, the organization will struggle to understand:
- Who is using AI
- For what purpose
- With what level of risk
Shadow AI - tools adopted without internal approval - further increases governance ambiguity and introduces unmanaged risks.
Comprehensive Perspective
When companies use generative AI, they must consider all seven information-security elements and implement comprehensive measures.
In addition, while this article focuses on security risks, companies should also be aware of other AI-specific risks, including:
- Hallucination (factually incorrect output)
- Non-compliant or unethical content
- Reputational risks when AI-generated content spreads internally or externally
These issues can lead to poor business decisions, reduced trust, or legal consequences.
When necessary, organizations should incorporate these risks into their AI governance measures.
2.2. Human Measures
Addressing information-security risks begins with controlling and guiding human behavior.
Generative AI places significant responsibility on users - for example, deciding what information to input and how to interpret and validate output.
If human measures fail, even strong technical controls may not be effective.
Below are four representative human-centered countermeasures.
2.2.1. Formulation and Enforcement of Usage Guidelines
Organizations should establish clear rules defining:
- What information can or cannot be input
- Acceptable use cases
- How generated content should be used, reviewed, or validated
These rules must be communicated through internal portals, training sessions, and awareness campaigns to ensure consistent compliance.
2.2.2. Clarifying User Scope and Responsibilities
Use of generative AI should not be left to individual discretion.
Instead, organizations should clearly define:
- Who is allowed to use AI
- Under what conditions
- With what authority and responsibilities
This helps prevent unauthorized use and improper handling of information.
2.2.3. Education and Awareness Programs
Generative AI is convenient, but users may trust its output too easily.
If employees do not understand risks such as hallucination or uncertainty in AI-generated content, they may unintentionally send incorrect information externally.
Organizations should therefore:
- Share real failure cases
- Explain the risks in practical terms
- Promote "personal accountability" in usage
Awareness programs help users make safer decisions.
2.2.4. Encouraging Appropriate Usage Choices
Generative AI is not always necessary or optimal for every task.
Employees must be able to judge:
- When AI should be used
- When AI should not be used
- When to seek guidance
To support appropriate decision-making, organizations should establish clear standards and provide accessible consultation channels.
2.3. Technical Measures
Human measures alone cannot eliminate risk.
Because generative AI interacts closely with external services, systematic technical controls play a crucial role in risk mitigation.
Below are four representative technical measures used by companies.
2.3.1. Information Control and Leakage Prevention
By implementing DLP (Data Loss Prevention), organizations can detect and block confidential information before it is entered into generative AI.
Recent advanced DLP solutions can:
- Analyze the meaning of entire prompts
- Identify sensitive content based on context
- Prevent risky input even beyond simple keyword matching
Additionally, applying input/output filtering enables rule based control of:
- Personal information
- Discriminatory expressions
- Inaccurate or unsafe outputs
These controls significantly reduce the risk of information leakage and misuse.
2.3.2. Enhanced Access Control and Usage Restrictions
Tools such as proxies, firewalls, and CASB (Cloud Access Security Broker) can:
- Control access to generative AI services
- Visualize usage patterns
- Enforce restrictions by department or time period
This form of "entry control" is essential not only for generative AI but also for all SaaS services, forming an important part of enterprise AI governance.
2.3.3. Development of an Internal Generative AI Environment
For organizations with higher security requirements, using a dedicated generative AI environment - built internally or on a closed network - is highly effective.
By leveraging on-premises or private-cloud AI infrastructure, companies can:
- Prevent business data from being sent to external services
- Centrally manage logs and access
- Customize features according to internal security policies
Such environments are suitable for handling sensitive information or mission-critical operations.
2.3.4. Identification and Traceability of Outputs
To prevent confusion between AI-generated content and existing internal documents, organizations can:
- Automatically attach identifiers indicating content was AI-generated
- Monitor and store logs of prompts, timestamps, and users
- Ensure outputs are traceable for audits or investigations
This enhances accountability and helps prevent recurrence when errors occur.
-
(*3)
ISO, ISO/IEC 27000:2018
https://www.iso.org/standard/73906.html
3. Conclusion
3.1. Security Measures in the Use of Generative AI
Security measures are a fundamental prerequisite for the safe use of generative AI. Rule development, access control, log management, and other governance mechanisms should not be viewed as restrictions that hinder usage, but rather as investments that enable organizations to adopt generative AI with confidence.
Combining both human measures (education, guidelines, governance) and technical controls (DLP, access restrictions, logging, dedicated AI environments) is essential to ensure that all employees share a consistent understanding of security.
However, implementing every possible countermeasure is not always optimal. There is an inherent trade-off between convenience and safety, and organizations must determine the appropriate balance based on their goals, operations, and risk tolerance. The key is to design a framework that supports safe usage without sacrificing the innovation and productivity benefits of generative AI.
3.2. Support from NTT DATA
NTT DATA has long provided end-to-end security services across the full lifecycle, from consulting and design to implementation and ongoing operations (*4).
Building on this experience, and drawing from extensive knowledge of generative AI usage and countermeasures both inside and outside the company, NTT DATA offers comprehensive support for safe AI adoption. This includes:
- Formulating usage guidelines
- Designing governance structures
- Selecting appropriate technologies
- Supporting implementation and operational management
The risks discussed in this article represent common challenges associated with generative AI, but they do not apply uniformly to all companies.
The degree of risk varies significantly depending on industry, business processes, organizational culture, and operational maturity.
NTT DATA begins by thoroughly understanding these differences and works collaboratively with each customer to design the optimal measures tailored to their unique operational environment and risk profile.
NTT DATA will continue to evolve alongside its customers, helping build systems capable of responding flexibly to new challenges as the landscape of generative AI continues to change.
-
(*4)
NTT DATA, Cybersecurity
https://www.nttdata.com/global/en/services/cybersecurity
Yu Ichinose
Global Business Unit, Security & Network Division, Solutions Sector, NTT DATA Japan
Joined the NTT DATA Group in 2024. In the Data Security domain, he has been involved in developing ransomware-countermeasure assets and contributing to knowledge sharing among global units. He is currently working on developing security service assets to address emerging risks surrounding generative AI.
Yusuke Miura
Global Business Unit, Security & Network Division, Solutions Sector, NTT DATA Japan
Specializes in developing new cybersecurity services using advanced technologies and works on identifying startups that can become next-generation partners, as well as co-creating partner businesses globally. Recently, has been focusing on developing security service assets related to generative AI.