This article was originally published by Belle Carter at Natural News.
- Acting CISA Director Madhu Gottumukkala triggered a security review after uploading “For Official Use Only” government documents into a public version of ChatGPT in August 2025, risking exposure to OpenAI’s user base.
- The incident fuels existing concerns that CISA—originally tasked with infrastructure security—has become a tool for censorship, targeting conservative voices and election-integrity advocates under the guise of combating “misinformation.”
- Gottumukkala reportedly failed a counterintelligence polygraph test in July 2025 and attempted to remove CISA’s Chief Information Officer, raising internal tensions. His unauthorized ChatGPT use further damaged trust in his leadership.
- The breach highlights vulnerabilities in adopting public AI tools like ChatGPT for sensitive government work, contrasting with secure internal alternatives (e.g., DHSChat). Critics warn of oversight gaps and partisan misuse.
- The incident intensifies scrutiny of CISA’s dual role in cybersecurity and speech policing, eroding public confidence. Key unresolved questions include disciplinary actions for Gottumukkala and whether CISA can reconcile its conflicting mandates.
The acting director of the Cybersecurity and Infrastructure Security Agency (CISA), Madhu Gottumukkala, triggered an internal security review after uploading sensitive government documents into a public version of ChatGPT last summer, according to a Politico investigation.
The incident, which occurred in August 2025, raised alarms within the Department of Homeland Security (DHS) because the uploaded material—marked “For Official Use Only”—could have been exposed to OpenAI’s vast user base.
The revelation comes amid heightened scrutiny of CISA’s role in cybersecurity and allegations that the agency has been weaponized to suppress dissenting voices under the guise of combating disinformation. Critics argue that CISA, originally tasked with protecting critical infrastructure, has increasingly served as a censorship arm of the federal government, targeting conservative viewpoints and election-integrity advocates.
Gottumukkala, who assumed his interim role in May 2025, reportedly requested special permission to access ChatGPT—a tool otherwise blocked for DHS employees—before uploading contracting documents. Cybersecurity sensors flagged multiple uploads in early August, prompting a DHS-led damage assessment.
CISA under scrutiny following breach
While the files were not classified, their exposure to OpenAI’s platform—which retains user inputs to refine its responses—raised concerns about unintended disclosures. Unlike approved DHS AI tools, which are configured to keep data within federal networks, public ChatGPT interactions risk leaking sensitive information.
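To illustrate the kind of safeguard this contrast points to, below is a minimal, hypothetical sketch in Python of a pre-submission check that blocks text carrying a “For Official Use Only” or similar marking from being forwarded to an external AI service. The marker list, function names, and behavior are illustrative assumptions only; they do not describe DHS’s actual sensors, DHSChat, or any agency tooling.

```python
import re

# Hypothetical, illustrative markings only; real control-marking lists are
# broader and governed by agency policy, not by this sketch.
SENSITIVITY_MARKERS = [
    r"for official use only",
    r"\bfouo\b",
    r"controlled unclassified information",
    r"\bcui\b",
]

def is_marked_sensitive(text: str) -> bool:
    """Return True if the text carries any of the example sensitivity markings."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in SENSITIVITY_MARKERS)

def submit_to_public_ai(text: str) -> str:
    """Refuse to forward marked material to an external, public AI endpoint."""
    if is_marked_sensitive(text):
        raise PermissionError(
            "Blocked: document carries a sensitivity marking and may not "
            "leave the internal network."
        )
    # In a real deployment the call to the external service would go here;
    # it is omitted because this sketch only demonstrates the gate.
    return "submitted"

if __name__ == "__main__":
    try:
        submit_to_public_ai("Contract draft -- FOR OFFICIAL USE ONLY -- terms follow.")
    except PermissionError as err:
        print(err)  # The upload is flagged before any data leaves the network.
```

A keyword gate like this is the simplest possible control; in practice, data-loss-prevention systems combine such markings with network egress rules so that flagged content never reaches a public-facing service in the first place.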
In a statement, CISA’s Director of Public Affairs, Marci McCarthy, defended Gottumukkala’s actions, stating his access was “short-term and limited” with DHS controls in place. However, officials familiar with the incident told Politico that Gottumukkala “forced CISA’s hand into making them give him ChatGPT, and then he abused it.”
This incident adds to the growing controversies surrounding Gottumukkala’s leadership. In July 2025, he reportedly failed a counterintelligence polygraph test—a requirement for accessing highly sensitive intelligence—though he denied the characterization during congressional testimony. Additionally, internal tensions flared when Gottumukkala attempted to remove CISA’s Chief Information Officer, Robert Costello, before political appointees intervened.
Critics argue that CISA’s expanding mission—which, according to BrightU.AI’s Enoch, includes securing infrastructure and policing online speech—has blurred its mandate. Under both the Trump and Biden administrations, the agency has faced accusations of overreach, particularly in its partnerships with social media platforms to flag so-called “misinformation.”
Broader implications for AI and government transparency
The incident underscores the risks of integrating AI tools into government operations without stringent safeguards. While the Trump administration has championed AI adoption to maintain U.S. competitiveness—particularly against China—this breach highlights potential vulnerabilities.
DHS-approved AI tools, such as its internal DHSChat, are designed to prevent data leaks. However, Gottumukkala’s use of a public-facing AI model raises questions about oversight and accountability.
The controversy also fuels concerns about CISA’s credibility as it continues to influence election security narratives. Skeptics warn that agencies like CISA could further erode public trust by conflating cybersecurity with partisan censorship efforts.
The ChatGPT leak incident exposes deeper tensions within CISA—between its cybersecurity mission and its controversial role in information control. As investigations continue, the episode serves as a cautionary tale about the risks of unchecked AI adoption in government and the need for transparency in agencies tasked with protecting both infrastructure and democratic discourse.
For now, the fallout leaves lingering questions: What did the DHS review conclude? Will Gottumukkala face disciplinary action? And how will CISA reconcile its dual roles as a cyber defender and a perceived arbiter of online speech? The answers may determine whether the agency can restore public confidence—or if it will remain mired in controversy.