Why we don’t use Grok — and why we never will

Grok gets pitched as another AI tool you “should probably try.” We won’t. Not now, not later. And we’re not alone — in architecture, where credibility and care matter, Grok and Grokopedia are already showing exactly why they’re a dead end.

1) Grokopedia: sloppy, deceptive, biased

Parlour’s Gill Matthewson pulled apart Grokopedia’s entry on Parlour and found it riddled with problems that aren’t neutral “mistakes” — they’re distortions that push an agenda. She shows how Grok flips basic meaning (turning her scrutiny of dodgy stats into an implication that Parlour’s data is what’s being scrutinised), and how it uses footnotes to prop up claims the sources don’t actually support.

More importantly, Grok leans into the tired narrative that gender inequity is mostly about “choice”, “meritocracy” and even “biology-driven factors”, while downplaying the evidence and systemic arguments Parlour exists to prosecute: structural barriers, embedded workplace bias, and a gendered division of labour, all of which still shape architectural workplaces. That’s not harmless AI slop — it’s misinformation delivered in a bland tone under an “encyclopedia” label.

2) No accountability, no recourse

Jeremy Till — a prominent UK architecture academic and author — publicly described asking Grokopedia to remove an entry about him because he didn’t want to be associated with Grok/X/Musk. The reported response wasn’t a normal correction pathway. It was effectively: your request is “malicious”; removal is “vandalism”.

That should concern anyone in practice. If a living person can’t get an AI-generated “encyclopedia” entry about themselves removed, what chance does everyone else have when they’re misrepresented? There’s no meaningful accountability — just the platform deciding it owns the narrative.

3) SRHD uses AI — just not that AI

At Spec Rep Help Desk, we absolutely use software from large international companies, including multiple AI providers, to deliver our service. We do that deliberately, because the right tool depends on the job.

One model might be excellent at drafting or restructuring text. Another might be better at classification, search, summarising long material, or handling highly technical domains. No single LLM is “best” at everything — and pretending otherwise is how people end up locked into tools that don’t actually fit their workflow.

By blending systems, we can optimise for quality, reliability, and speed — and we’re not captive to any one platform’s ideology, incentives, or failure modes. That flexibility is part of professional responsibility.

4) Values matter: Grok is Musk — and that’s a non-starter

This isn’t just about bad outputs. It’s about what the product is for, and whose worldview it reflects. Elon Musk’s values are inseparable from Grok, and his political role — including DOGE and the broader undermining of democratic norms — makes the entire platform unacceptable to us. We’re not bringing that ecosystem into architectural practice.

The bottom line

We don’t need Grok. We don’t want Grok. And from what we’re seeing, the architecture community doesn’t either. There are plenty of AI tools that can support practice without importing bias, misrepresentation, and corrosive governance. Grok isn’t one of them — and it never will be.
