Phantom science


There’s a new specter haunting environmental governance, and it doesn’t rattle ghostly chains; it generates phantom science.

Recently, I was reading a government report, trying to find the scientific justification for certain environmental actions, when I ran into some citations that looked interesting. So I tried to look them up. Despite full, official-looking entries in the reference list, complete with links, none of them existed. The links typically did work, but they took me to completely unrelated (though real) scientific papers. Having been in academia, I was used to scientists inserting dubious, unrelated citations of their own work in order to get their citation rates up. But this was something completely different.

So I tried using Google Scholar, and then Google AI, to find papers on that topic. While there was nothing on Google Scholar, Google AI showed me citations and links that looked very similar to the phantom one I had found… and which also didn’t exist. After pointing out the fake to Google AI, I asked for a real citation. Up came yet another phantom citation. I repeated this three more times, and each time I got a phantom citation for a phantom paper.

Going back to the government report (upon which a major project was based), it looked to be riddled with phantom citations, all providing fake support and backing up what the project developers wanted (and “proving” there would be no environmental damage). It looked like the whole report had been written with AI.

Across academia, law, and now government, generative AI systems are quietly reshaping how reports are written. They promise speed, efficiency, and cost savings. But they also come with a well-documented flaw: they make things up. Not in the obvious, sloppy way of a student padding a bibliography 5 minutes before a submission deadline, but in a far more insidious fashion, by producing polished, plausible but entirely fictional scientific references. Increasingly, those phantom citations are haunting official documents.

The rise of the phantom menace

We’ve moved well beyond hypothetical concerns. AI hallucinations (confidently presented false information) are now empirically documented across multiple domains.

This is not a fringe academic issue; it’s now systemic. What’s more, it gets worse when AI is asked to support a predetermined conclusion: models have been shown to be especially likely to invent sources when prompted to support a specific point. Sound familiar?

When cheating on a homework essay becomes a policy crisis

If this were confined to student essays, sloppy conference presentations, or papers in low-tier journals, it would be embarrassing. But it isn’t. There are now documented cases of AI-generated hallucinations appearing in government reports and key policy documents.

These are not harmless typos. These are structural failures in evidence-based policymaking.

Because once a fabricated citation enters an official report, it gains legitimacy. It gets cited again. It enters the grey literature. It becomes “fact” by repetition.

This is how scientific understanding erodes—not with a bang, but with a bibliography.

“So tell me what you want, what you really, really want.” The Spice Girls

Let’s be blunt: this isn’t just about technology. It’s about incentives. Government agencies are under pressure to:

justify predetermined policy positions;

produce reports quickly; and

do so with shrinking budgets and staff.

Generative AI is perfectly suited to this environment. Not because it finds truth, but because it produces convincing narratives on demand. Ask it for the state of the science, and you might get something reasonable. Ask it to support a conclusion, and you will almost certainly get something compliant. AI doesn’t “lie” in the human sense. It optimizes for plausibility. What is more, in the current political climate, a plausible lie is often more useful than facts. Therein lies the danger…

“You want the truth? You can’t handle the truth!” A Few Good Men

If this feels abstract, consider what’s happening in the legal system.

There are now hundreds of documented cases of lawyers submitting filings containing entirely fabricated case law generated by AI. Courts have issued sanctions, fines, and public reprimands. Judges have been clear: submitting hallucinated citations is not a technical glitch; it’s professional misconduct. Now translate that standard to environmental governance. What happens when:

an environmental impact assessment cites nonexistent studies?

a fisheries management plan relies on fabricated population data? or

a climate risk report includes invented supporting literature?

At that point, we are no longer dealing with bad science. We are dealing with legally actionable failure.

“You shall not pass!” The Lord of the Rings

Environmental NGOs, political watchdog groups, and investigative journalists should take this both seriously and strategically. They need to get serious about hunting down AI-generated government reports and policy documents, and about blocking decisions based on them in the courts.

“The 600 series had rubber skin. We spotted them easy. But these are new… they look human.” The Terminator

Here are some suggestions for NGOs to test whether government reports and policy documents or other scientific documents are bona fide or “body snatchers”.

Check the references

Take major agency reports and randomly sample citations. Verify DOIs, authors, and even the existence of the cited journal. You don’t need AI expertise; you just need patience and Google Scholar.
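If you want to go a step beyond hand-checking, a few lines of Python can do the drudge work. Here’s a minimal sketch that asks the public Crossref registry whether each cited DOI exists, and whether the registered title resembles the cited one. The DOI and title in the example are hypothetical, and the similarity threshold is a rough guess:

```python
# Spot-check cited DOIs against the public Crossref registry.
# A DOI Crossref has never seen is a strong phantom signal; a registered
# DOI whose title doesn't resemble the cited title suggests a link that
# points at an unrelated (but real) paper.
# Assumes the third-party `requests` package. The DOI and title below
# are hypothetical examples, not real citations.
import difflib
import requests

citations = {
    # DOI : title as it appears in the report's reference list
    "10.1000/hypothetical.2024.001": "No measurable impact of dredging on estuarine fish stocks",
}

for doi, cited_title in citations.items():
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code == 404:
        print(f"PHANTOM? DOI not registered anywhere: {doi}")
        continue
    resp.raise_for_status()
    titles = resp.json()["message"].get("title") or [""]
    similarity = difflib.SequenceMatcher(
        None, cited_title.lower(), titles[0].lower()
    ).ratio()
    if similarity < 0.8:  # crude threshold; tune by hand
        print(f"MISMATCH: {doi} resolves to a different paper:")
        print(f"  cited:      {cited_title}")
        print(f"  registered: {titles[0]}")
    else:
        print(f"OK: {doi}")
```

An unregistered DOI, or one that resolves to a completely different paper, is exactly the pattern I kept finding in that government report.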

Look for tell-tale AI signatures

AI detection software exists, but it often hinges on whether report writers actually know grammar: it tends to be triggered by consistently correct usage, on the assumption that real people do not know the difference between (or use) an em-dash and an en-dash (both of which AI loves), or the Oxford comma. Try checking for repeated citation structures or identical phrasing across sections. In particular, look for references that almost (but don’t quite) exist. These are well-documented artifacts of AI-generated text.
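To make the “identical phrasing” check concrete, here is a crude, stdlib-only sketch that flags near-duplicate sentence pairs within a single document. The filename and thresholds are arbitrary choices; treat it as a triage tool, not a validated AI detector:

```python
# Rough screen for near-duplicate sentences across a report, one of the
# repeated-phrasing artifacts described above. Standard library only.
import difflib
import re

with open("agency_report.txt", encoding="utf-8") as f:
    text = f.read()

# Naive sentence split; good enough for a first pass.
sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text)
             if len(s.strip()) > 40]

# Quadratic comparison: fine for one report, slow for a whole corpus.
for i, a in enumerate(sentences):
    for b in sentences[i + 1:]:
        ratio = difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()
        if ratio > 0.9:
            print(f"{ratio:.2f}  possible boilerplate pair:")
            print(f"  {a}")
            print(f"  {b}\n")
```

A handful of hits is normal in any long report; dozens of near-identical sentences scattered across unrelated sections is the kind of signature worth a closer look.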

Use the Freedom of Information Act (FOIA)

Request information on a document’s drafting processes, internal communications about report preparation and the agency’s AI usage policies. If AI was used without disclosure or verification protocols, that matters.

Use the law

By 2027, expect this to hit the courts in a meaningful way. Potential legal angles for environmental NGOs include:

Administrative Procedure Act (APA) challenges (i.e., decisions based on flawed evidence);

Freedom of Information violations (i.e., failure to disclose methodology); and

Scientific integrity policies (many agencies have them but few enforce them).

If a report underpinning a regulatory decision contains fabricated evidence, that decision becomes vulnerable.

The uncomfortable truth is out there

We are at the early stage of a credibility crisis. Right now, AI hallucinations are treated as quirks or bugs to be ironed out. But the evidence suggests they are a structural feature, not a temporary glitch. When those hallucinations enter the scientific record, the legal system, and the policy process, they stop being technical issues and become governance failures. The uncomfortable truth is that we are already making environmental decisions based, in part, on things that do not exist!

“Nobody trusts anybody now… and we’re all very tired.” The Thing

None of this means AI has no place in science or policy. It can summarize, translate, and assist. But it cannot be treated as a source of truth, because it isn’t one. Until agencies build robust verification pipelines (and until there are consequences for failing to use them), the burden will fall on NGOs, journalists, and scientists willing to check the references and footnotes.

The next environmental lawsuit might not hinge on the presence of a threatened species or a habitat model. It might hinge on cited science that was never real in the first place.