Wednesday, March 18, 2026

Democrats demand answers on DOGE's use of AI


Democrats on the House Oversight Committee fired off two dozen requests on Wednesday morning, pressing federal agency leaders for information about plans to install AI software across federal agencies amid the ongoing cuts to the government's workforce.

The flood of inquiries follows recent reporting by Wired and The Washington Post on the efforts of Elon Musk's so-called Department of Government Efficiency (DOGE) to automate tasks with a variety of proprietary AI tools and to access sensitive data.

"The American people entrust the federal government with sensitive personal data related to their health, finances, and other biographical information on the understanding that this information will not be disclosed or improperly used without their consent," the requests read, "including through the use of unapproved and unaccountable AI software."

The requests, first obtained by Wired, were signed by Gerald Connolly, a Democratic congressman from Virginia.

Chiefly, the requests press the agencies to demonstrate that any potential use of AI is legal and that steps are being taken to safeguard Americans' private data. The Democrats also want to know whether any use of AI will financially benefit Musk, who founded xAI and whose troubled car company, Tesla, is pivoting toward robotics and AI. The Democrats are further concerned, Connolly writes, that Musk could be using his access to sensitive government data for personal enrichment, leveraging the data to train his own AI model, known as Grok.

In the requests, Connolly notes that federal agencies are "bound by multiple statutory requirements in their use of AI software," pointing chiefly to the Federal Risk and Authorization Management Program, which works to standardize the government's approach to cloud services and to ensure that AI-based tools are properly assessed for security risks. He also points to the Advancing American AI Act, which requires federal agencies to "prepare and maintain an inventory of the artificial intelligence use cases of the agency," as well as to "make agency inventories available to the public."

Documents obtained by Wired last week show that DOGE operatives have rolled out a proprietary chatbot called GSAi to roughly 1,500 federal workers. The General Services Administration (GSA) oversees much of the federal government's real estate and provides IT services for many agencies.

A memo obtained by Wired reporters shows that employees were warned against feeding the software any controlled unclassified information. Other agencies, including the Departments of the Treasury and Health and Human Services, have considered using a chatbot, though not necessarily GSAi, according to documents viewed by Wired.

Wired also reported that the United States Army is currently using software called CamoGPT to scan its record systems for references to diversity, equity, inclusion, and accessibility. An Army spokesperson confirmed the tool's existence but declined to provide further information about how the Army plans to use it.

Connolly writes that the Department of Education possesses the personal information of more than 43 million people tied to federal student aid programs. "Due to the opaque and frenetic pace at which DOGE seems to be operating," he writes, "I am deeply concerned that students', parents', spouses', family members' and all other borrowers' sensitive information is being handled by secretive members of the DOGE team for unclear purposes and with no safeguards to prevent disclosure or improper, unethical use." The Washington Post previously reported that DOGE had begun feeding sensitive federal data drawn from Education Department record systems to analyze its spending.

Education Secretary Linda McMahon said on Tuesday that she is proceeding with plans to lay off more than a thousand workers at the department, who will join hundreds of others who accepted DOGE "buyout" offers last month. The Education Department has now lost nearly half of its workforce, the first step, McMahon says, toward fully abolishing the agency.

"The use of AI to evaluate sensitive data is fraught with serious hazards beyond improper disclosure," Connolly writes, warning that "inputs used and the parameters selected for analysis may be flawed, errors may be introduced through the design of the AI software, and staff may misinterpret AI recommendations, among other concerns."

He adds: "Without a clear purpose behind the use of AI, guardrails to ensure appropriate handling of data, and adequate oversight and transparency, the use of AI is dangerous and potentially violates federal law."
