Wednesday, June 11, 2025

Mistakes were made

When an AI script written by a Department of Government Efficiency employee came across a contract for internet service, it flagged the agreement as cancelable. Not because it was waste, fraud or abuse — the Department of Veterans Affairs needs internet connectivity, after all — but because the model was given unclear and conflicting instructions.

Sahil Lavingia, who wrote the code, told it to cancel, or in his words “munch,” anything that wasn’t “directly supporting patient care.” Unfortunately, neither Lavingia nor the model had the knowledge required to make such determinations.

“I think that mistakes were made,” said Lavingia, who worked at DOGE for nearly two months, in an interview with ProPublica. “I’m sure mistakes were made. Mistakes are always made.”

ProPublica
Not that HE made them, of course.
[T]he code relied on older, general-purpose models not suited for the task; the model hallucinated contract amounts, deciding around 1,100 of the agreements were each worth $34 million when they were sometimes worth thousands; and the AI did not analyze the entire text of contracts. Most experts said that, in addition to the technical issues, using off-the-shelf AI models for the task — with little context on how the VA works — should have been a nonstarter.
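ProPublica's finding that the AI "did not analyze the entire text of contracts" points at a common failure mode: feeding a model only the opening slice of a long document, so figures buried later in the text never reach it. Here is a minimal sketch of that pitfall — the function name, prompt wording, and character limit are all hypothetical, not from Lavingia's actual code:

```python
def build_prompt(contract_text: str, max_chars: int = 2500) -> str:
    """Hypothetical sketch of a flawed prompt builder.

    Only the first max_chars of the contract reach the model, so a
    dollar amount or scope clause appearing later in the document is
    invisible to it — inviting the model to guess (hallucinate) a value.
    """
    snippet = contract_text[:max_chars]
    return (
        "Decide whether this contract is 'munchable' (cancelable) "
        "and state its total value:\n\n" + snippet
    )
```

In a setup like this, a contract whose total value is stated on a later page would be truncated out of the prompt entirely, and a model asked for the value anyway has nothing to go on but a guess — consistent with the report of roughly 1,100 agreements each pegged at $34 million when some were worth thousands.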

Lavingia, a software engineer enlisted by DOGE, acknowledged there were flaws in what he created and blamed, in part, a lack of time and proper tools.
