Why I choose Claude, and why it should matter to bid professionals
The AI tool you choose is a values statement. Most people haven't thought about it that way.
I hadn't either, until Anthropic did something we don't hear about very often: they walked away from a reported $200 million US government contract rather than let their AI be used for mass domestic surveillance and fully autonomous weapons systems. The Pentagon has since designated Anthropic a supply-chain risk, a national-security classification that restricts its use in US defense contracts. It's okay though, don't panic: OpenAI has since stepped in.
I'd already switched from ChatGPT to Claude before this happened. I liked that Anthropic had drawn lines in the sand and meant them. That kind of institutional integrity is rarer than it should be. I know this because I've spent 25+ years on the bid side of that table.
The framework is often not the whole story
Here's what bid professionals know that most people don't: the framework is often not the whole story. There are strict probity policies. Conflicts of interest are declared (or not). Inquiry after inquiry - IBAC, ICAC, the ANAO, the NACC referrals sitting in a queue - finds the same patterns. Undisclosed relationships. Suppliers chosen before the tender drops. Dummy bids submitted to create the appearance of competition.
Anyone who has worked in this industry long enough has felt the gap between stated values and actual behaviour. We don't always name it, but we feel it in our bones.
So when a technology company publicly, expensively, demonstrates that their stated values and their actual behaviour are the same thing, that lands differently for those of us in bidding.
Blowing smoke
Most bid professionals aren't waiting for their IT department to catch up. They're using ChatGPT or Claude on a personal device, outside the firewall, in the gaps between what's approved and what actually gets the work done. Meanwhile, many organisations have rolled out Copilot, which is functional, well-integrated, and built on the same OpenAI infrastructure that stepped in to fill the Pentagon's contract gap. Nobody asked whether that mattered. They asked whether it integrated with SharePoint.
There's a Mad Men moment in this. When Sterling Cooper Draper Pryce lost the Lucky Strike account, Don Draper stayed up all night writing a full-page ad in the New York Times declaring he'd quit tobacco (as I recall, he was smoking while he wrote it). Turned a crisis into a positioning statement. Lost one client, built a reputation. Which is what Anthropic has done. The UK, Japan and now Australia have all signed formal agreements with the company, with memorandums focused on AI safety, risk evaluation and responsible deployment of frontier models in public services.
Getting the packaging wrong is one thing
Anthropic ain't perfect - they accidentally leaked their own source code this week, the second time in the last year, which is an embarrassing operational failure for a company that sells trust. There's a difference, though, between getting your packaging wrong and compromising your principles. One is fixable. The other tends not to be.
Why this matters right now
Trust in AI is falling while usage climbs. A Quinnipiac University poll published this week found that 76% of Americans trust AI-generated information rarely or only sometimes, and that's after a year of increased adoption. The Thales Digital Trust Index 2026, a global study of 15,000 consumers and organisations, found that only 23% trust companies to use AI responsibly, while 77% remain concerned about AI agents acting on their behalf online.
People are using the tools. They just don't trust the companies behind them. That gap is a values problem - the kind of pressure that eventually turns into more reports, inquiries and legislation.
Against that backdrop, choosing an AI vendor based on what they've publicly refused to do is actually due diligence.
Here's a question worth sitting with: do you know whose values you've outsourced your thinking to?
While you don't need to switch tools, you should make the choice consciously, the same way you'd expect a tender process to be run.
And if you ask your AI vendor what they won't do and they don't have an answer, that's an answer too.