NITDA Warns Nigerians Against ChatGPT Vulnerabilities That Enable Data-Leakage Attacks

NITDA has issued a new warning to Nigerians over fresh vulnerabilities discovered in ChatGPT that could expose sensitive personal and business data. Here’s what users need to know and how to stay protected.

The National Information Technology Development Agency (NITDA) has issued a fresh warning to Nigerians over new vulnerabilities discovered in ChatGPT, raising concerns about how personal and confidential data could be exposed when using the AI tool. The advisory, highlighted in reports by Nairametrics, comes at a time when the platform is being used daily for content creation, business automation, customer service, and research across the country.

According to NITDA, the vulnerabilities make it possible for attackers to exploit weaknesses in AI systems and gain access to sensitive user information. The agency explained that cybercriminals are now targeting AI platforms more aggressively because of their rising popularity. As reported by PUNCH, these emerging threats include unauthorized data retrieval, prompt-based extraction attacks, and loopholes that allow malicious actors to reconstruct information users previously entered into the chatbot.

NITDA warned that many Nigerians unknowingly expose themselves by entering personal names, business documents, account details, and even corporate files into ChatGPT without understanding the security implications. While the tool is designed to assist users, it remains vulnerable to exploitation when used carelessly.

The agency stressed that some of the recently detected vulnerabilities allow attackers to override safety restrictions, manipulate chat flows, or access cached segments of earlier conversations. This means information users believe is deleted or private may still be recoverable under certain malicious conditions. In a landscape where cybercrime is increasing, such risks pose a significant threat to personal data and business operations.

To mitigate these dangers, NITDA advised Nigerians to practice strict digital hygiene. Users are urged to avoid entering sensitive information such as passwords, ID numbers, bank details, or confidential business records when interacting with AI tools. Instead, information should be anonymized or generalized whenever possible. The agency also encouraged organizations to develop internal AI-usage policies to prevent accidental exposure of customer data or proprietary assets.
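For illustration, the anonymization step NITDA recommends can be sketched as a simple pre-processing pass run before any text is submitted to an AI tool. This is a minimal sketch under stated assumptions: the patterns, placeholder labels, and the `anonymize` function are hypothetical examples for this article, not NITDA-endorsed or ChatGPT-specific tooling, and a real deployment would need far more robust detection.

```python
import re

# Hypothetical patterns for a few common identifiers; a production system
# would use a dedicated PII-detection library instead of ad-hoc regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{9,13}\d"),
    "ACCOUNT_NO": re.compile(r"\b\d{10}\b"),  # NUBAN account numbers are 10 digits
}

def anonymize(text: str) -> str:
    """Replace likely identifiers with placeholder tags before the text
    is sent to any third-party AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Refund NGN 5,000 to account 0123456789, owner ada@example.com"
print(anonymize(prompt))
# → Refund NGN 5,000 to account [ACCOUNT_NO], owner [EMAIL]
```

The idea is simply that placeholders preserve the structure of the request, so the AI tool can still help with the task, while the identifying details never leave the user's machine.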

NITDA further emphasized that Nigerian companies integrating AI into customer service, HR, marketing, or internal operations must adopt stronger oversight mechanisms. Many businesses now use ChatGPT without establishing security guardrails, increasing the likelihood of data leakage.

The warning aligns with global trends as regulatory bodies worldwide issue new guidelines to address the security risks posed by rapidly expanding AI adoption. For Nigeria, NITDA believes staying proactive is essential as the nation deepens its digital-economy agenda and becomes more reliant on AI-driven tools.

Although ChatGPT remains safe for general, everyday tasks, the agency insists that Nigerians must combine convenience with caution. Users are encouraged to remain updated on AI-security practices, follow official advisory notices, and adopt privacy-first habits when using digital platforms.

With AI now influencing communication, business workflows, and decision-making, NITDA said Nigeria must remain vigilant to avoid preventable data breaches and ensure that emerging technologies are used responsibly and safely.
