NITDA WARNS ABOUT POSSIBLE FLAWS IN CHATGPT’s NEW VERSIOS

Abuja, Nigeria — February 3, 2026

The National Information Technology Development Agency (NITDA) has issued a public warning to Nigerian internet users, developers, and professionals who rely heavily on artificial intelligence tools, cautioning that serious vulnerabilities have been identified in advanced AI models, including GPT-4 and GPT-5.
According to the Agency, these weaknesses could be exploited by cybercriminals to manipulate AI-generated outputs and, in some cases, gain unauthorized access to users’ sensitive data. NITDA stressed that the growing adoption of AI across sectors such as finance, education, media, and public services makes these risks particularly concerning.
In an official statement, NITDA outlined seven critical security flaws discovered in large language models.

One major concern is the ability for attackers to embed hidden malicious instructions inside seemingly harmless web content. These instructions can be concealed in social media comments, blog posts, or shortened links, and may be unknowingly executed by AI systems during routine activities such as summarizing text, analyzing documents, or browsing the internet.

The Agency also highlighted other dangerous exploits, including techniques that allow attackers to bypass built-in safety filters, hide harmful commands using markdown and formatting bugs, and engage in memory poisoning—a method that gradually alters an AI model’s behavior over time. Such manipulation could eventually result in data leaks, misinformation, or unauthorized actions carried out through AI-assisted systems.

While OpenAI has acknowledged some of these issues and claims to have patched several vulnerabilities, NITDA noted that large language models still face significant challenges in detecting cleverly disguised or context-aware malicious instructions.
As a precaution, NITDA advised Nigerian cyberusers and AI professionals to exercise heightened caution when using AI tools. The Agency urged users to verify AI-generated outputs, avoid blindly trusting automated responses, and remain alert to suspicious or unfamiliar online content.

NITDA reaffirmed its commitment to promoting safe and responsible use of emerging technologies and encouraged organizations to strengthen their cybersecurity practices as AI adoption continues to expand nationwide.

Leave a Reply

Your email address will not be published. Required fields are marked *

Back To Top