In Tromsø, a public report recommending the closure of schools was created using ChatGPT — and no one bothered to verify the sources.
Out of 18 references, 11 were completely fake. One even cited a book that doesn’t exist, attributed to a real Norwegian professor, who was understandably furious.
Let’s be clear: this isn’t a failure of technology or education.
This is a failure of basic professional responsibility.
Norwegian Minister Karianne Tung’s response, applauding Tromsø for trying AI, completely misses the point. You don’t get a pat on the back for experimenting with tools without oversight, especially when it leads to misinformation in official documents. That’s not innovation; it’s incompetence.
If you work in government, handling decisions that affect children’s futures, you are expected to know the difference between real research and auto-generated nonsense.
You must check your sources. If you don’t, you’re not doing your job. In any serious organization, not doing your job has consequences.
And you can’t hide behind a lack of guidelines on how to use ChatGPT: the interface itself states clearly that you must check the sources.
We can’t regulate our way out of stupidity.
We can build safer systems, yes. However, no system replaces the need for good judgment and accountability.
This case should be a wake-up call:
AI isn’t dangerous — people using it irresponsibly are.
And yes, this article was checked by a human.