NITDA Raises Alarm Over Chatgpt Security Flaws Enabling Data-Leakage Attacks

The National Information Technology Development Agency (NITDA) has issued a critical cybersecurity advisory alerting Nigerians to newly discovered vulnerabilities in ChatGPT that could expose users to data leakage and targeted cyberattacks.

The advisory, released through the agency’s Computer Emergency Readiness and Response Team (CERRT.NG), comes amid growing reliance on AI systems for business operations, academic research, and government processes.

According to NITDA, cybersecurity researchers have uncovered seven security flaws affecting the GPT-4o and GPT-5 models, enabling cybercriminals to manipulate ChatGPT using indirect prompt injection techniques.

The agency explained that attackers can plant hidden instructions inside webpages, comment sections, or URLs—commands which ChatGPT may unknowingly execute when users browse, summarise, or interact with such content.

“By embedding hidden instructions in webpages, comments, or crafted URLs, attackers can cause ChatGPT to execute unintended commands simply through normal browsing, summarisation, or search actions,” the advisory noted.

The agency added that: Some vulnerabilities allow malicious prompts to bypass safety controls when disguised under trusted domains.

Others exploit markdown rendering weaknesses, enabling embedded harmful instructions to slip through undetected.

In more severe cases, attackers may poison ChatGPT’s memory—causing the system to retain harmful instructions that could influence future responses.

While OpenAI has addressed parts of the issue, NITDA warned that large language models still struggle to fully distinguish legitimate user prompts from embedded malicious data.

Potential Risks to Users

If exploited, these vulnerabilities could expose individuals and organisations to:

Data leakage

Unauthorised command execution

Manipulated responses and misinformation

Compromised enterprise workflows

Long-term conversational manipulation through memory poisoning

NITDA’s Safety Recommendations

The agency advised Nigerians, businesses, and government bodies to adopt strict safety measures when using AI systems. These include:

Limiting or disabling browsing and summarisation features, especially when interacting with untrusted websites.

Enabling browsing and memory features only when absolutely necessary.

Ensuring that deployed GPT-4o and GPT-5 models are regularly updated to patch known vulnerabilities.

This advisory follows a similar warning issued by NITDA earlier in the year concerning a major security flaw found in embedded SIM (eSIM) technology. The vulnerability—linked to the GSMA TS 48 Generic Test Profile (version 6.0 and earlier)—placed over two billion devices at risk of SIM-level attacks, including malicious applet installation, cryptographic key theft, message interception, and covert device control.

NITDA reiterated its commitment to safeguarding Nigeria’s digital ecosystem and urged users to remain vigilant as cyberthreats evolve alongside emerging technologies.