After I wrote about trying to buy a Mac Studio and failing, the replies kept circling the same question.
"If I can't buy the hardware and I don't trust the cloud, what am I supposed to do?"
That question led me somewhere I didn't expect.
I Checked What GPUs You Can Actually Get in Canada
Not the marketing pages. Not the pricing calculators either.
The actual hardware you can spin up today in a Canadian data center.
I went through every major cloud provider with Canadian infrastructure and checked what's real.
Google Cloud Montreal. P4 and T4 GPUs.
Hardware from 2017 and 2018. No H100. No A100. Not even an L4.
Third-party pricing tools show H100 instances and prices for Montreal.
Those instances don't exist. Try to launch one. It won't work.
Google Cloud Toronto. Slightly better. L4 GPUs and limited H100 in an "A3 Edge" config that Google's own documentation restricts to inference only.
You cannot train a model on current-gen hardware in Canada on Google Cloud.
AWS Canada. Partial GPU support. Latest-generation P5 instances with H100s are concentrated in US regions. What's actually available in Canadian zones is unclear — even from their own sales teams.
OVHcloud Beauharnois, Quebec. V100 and V100S. Also 2017 and 2018 vintage. Their website shows H100 pricing, but those machines are in Gravelines, France. Not Canada.
Iowa alone has more GPU variety than all of Canada combined on Google Cloud.
One exception. DigitalOcean's Toronto data center.
H100 at $2.99/hr. L40S at $1.57/hr. Real hardware. Actually available. Limited capacity — DigitalOcean is a smaller provider — but it's there.
That's it.
That's Canada's current-gen AI compute infrastructure.
Residency Is Not Sovereignty
DigitalOcean is a US company.
So is AWS. So is Google. So is Lambda. So is CoreWeave.
Under the US CLOUD Act, American authorities can compel US-headquartered cloud providers to hand over customer data regardless of where it physically sits.
A server rack in Toronto. Running your model. On Canadian soil. Paid for in Canadian dollars.
A US court order can still reach it.
Residency means your data is physically in Canada.
Sovereignty means no foreign legal framework can override your control of it.
For most startups, residency is enough. For healthcare, finance, government, anything touching regulated data — it's not.
The genuinely Canadian-owned GPU providers I could find? Consensus Core running H100s out of Cologix Montreal.
CoEvo running GPUaaS out of Cologix Vancouver. Neither has the name recognition or SLA documentation of the US players.
But they're the closest thing Canada has to actual compute sovereignty.
Three Products People Keep Confusing
When someone says "the cloud is bad for sovereignty," they're usually collapsing three very different things into one complaint.
This confusion costs real money.
First product: hyperscaler AI APIs.
OpenAI, Anthropic, Gemini. You rent the intelligence. Your data leaves your jurisdiction.
You never own the weights. Highest sovereignty cost. Also the fastest way to ship. Most startups start here. That's fine — until it isn't.
Second product: hyperscaler compute.
AWS, GCP, Azure. You rent the hardware and bring your own model. Better sovereignty than renting someone else's model.
Still expensive. Still US-hosted for most current-gen hardware. And you're paying a premium that has nothing to do with compute.
Third product: sovereign bare-metal rental.
DigitalOcean TOR1, Hetzner, Lambda. You rent raw compute in a jurisdiction you choose. Bring your own open-source model.
Nothing proprietary leaves your stack. Three to five times cheaper than hyperscaler compute.
Each one is a different product with a different risk profile.
Treating them as the same thing is how companies end up overpaying for compute they don't control — or under-investing in sovereignty they actually need.
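The three-to-five-times gap is easy to sanity-check with rough arithmetic. The hyperscaler rate below is an assumption on my part (roughly what an on-demand H100 works out to per GPU-hour at a large US provider); only the $2.99/hr DigitalOcean rate comes from the numbers above.

```python
# Rough monthly cost of one H100 running full time under the two
# compute models. The hyperscaler per-GPU-hour rate is an assumed
# illustrative figure, not a quote; the bare-metal rate is the
# DigitalOcean Toronto list price cited earlier.
HOURS_PER_MONTH = 720  # 30 days * 24 hours

hyperscaler_h100 = 12.00  # assumed hyperscaler on-demand rate, $/GPU-hr
bare_metal_h100 = 2.99    # DigitalOcean TOR1 listed rate, $/GPU-hr

print(f"hyperscaler: ${hyperscaler_h100 * HOURS_PER_MONTH:,.0f}/mo")
print(f"bare metal:  ${bare_metal_h100 * HOURS_PER_MONTH:,.0f}/mo")
print(f"ratio: {hyperscaler_h100 / bare_metal_h100:.1f}x")  # roughly 4x
```

At these assumed rates the ratio lands around 4x, inside the three-to-five-times range — the exact multiple depends on the hyperscaler's region, commitment discounts, and instance shape.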
Why Enterprises Actually Pay AWS
This one took me a while to understand.
Enterprises don't pay AWS $200 billion a year for compute.
They pay AWS for a throat to choke when things break.
When your production system goes down at 2am and your CEO wants to know whose fault it is, "we run on AWS" is an answer. "We self-host on a rented GPU in Toronto" is not. At least not at most companies.
That's accountability insurance. Hyperscalers sell accountability externalization. You're not paying for the server.
You're paying so that when something goes wrong, it's somebody else's problem.
That's a real product. It's worth real money to the right buyer.
Sovereign infrastructure is the opposite trade. You own the stack. You own the uptime.
You own the failure. Cheaper. More controlled. But you have to be the kind of organization willing to own outcomes, not outsource them.
Neither answer is wrong.
But they're very different answers for very different buyers. And most of the "just self-host" content online skips this part entirely.
Canada Has a Strategy. It Doesn't Have Infrastructure.
The federal government committed $2 billion to a Sovereign AI Compute Strategy.
$300 million of that went to the AI Compute Access Fund — grants of $100K to $5M for Canadian businesses.
The fund covers 67% of eligible Canadian cloud costs and 50% of non-Canadian compute costs.
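Those coverage rates translate directly into effective hourly prices. A minimal sketch, assuming the percentages apply straight to the list price — and note the open question the fund itself raises: whether a US-owned provider's Toronto data center even counts as "Canadian cloud" for the 67% rate.

```python
# Effective hourly GPU cost under the AI Compute Access Fund,
# assuming the stated coverage rates apply directly to list price.
# Whether a given provider qualifies as "Canadian cloud" is a
# policy question, not something this sketch can answer.
CANADIAN_COVERAGE = 0.67      # eligible Canadian cloud costs
NON_CANADIAN_COVERAGE = 0.50  # non-Canadian compute costs

def effective_rate(list_price: float, canadian: bool) -> float:
    """Hourly cost to the business after the subsidy."""
    coverage = CANADIAN_COVERAGE if canadian else NON_CANADIAN_COVERAGE
    return list_price * (1 - coverage)

# Example: an H100 at the $2.99/hr rate cited earlier.
print(effective_rate(2.99, canadian=True))   # ≈ $0.99/hr if it qualifies
print(effective_rate(2.99, canadian=False))  # ≈ $1.50/hr if it doesn't
```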
Real money. Good intentions.
But the only provider with current-gen GPUs in a Canadian data center is DigitalOcean. A US company.
Google Cloud Montreal is running 2017 hardware. AWS Canada's H100 availability is a question mark.
The government is subsidizing access to US-owned infrastructure and calling it a sovereign compute strategy.
The policy is funding the dependency it's designed to solve.
What France Did Instead
France committed over 109 billion euros under France 2030.
Mistral AI's compute platform alone will run 18,000 Blackwell GPUs in its initial phase.
A joint France-NVIDIA partnership backed by Bpifrance — France's national investment bank — is building Europe's largest AI campus in the Paris region.
France and Germany announced a sovereign AI partnership with Mistral and SAP in November 2025. AI for public administration. Rollouts starting this year.
Canada's $2 billion versus France's 109 billion euros.
That's not a difference in degree. That's a difference in kind.
One country has an AI compute strategy. The other country has AI compute infrastructure.
US Export Controls Apply to Allies
In March 2026, US officials drafted regulations that would require government approval for exports of advanced AI chips to any country in the world.
Including allies. Including Canada.
Shipments under 1,000 GPUs need basic review. Medium deployments need pre-approval.
Clusters of 200,000 or more require intergovernmental agreements with national security commitments.
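The draft tiers can be stated as a simple rule. The thresholds below come from the article; the boundary of the "medium" band between the two stated cutoffs is my inference, not the regulation's wording.

```python
def export_review_tier(gpu_count: int) -> str:
    """Map a GPU shipment size to the draft US export-review tier.

    Thresholds are the ones cited in the article (under 1,000;
    200,000 or more). The exact span of the 'medium' band between
    them is inferred, not taken from the draft text.
    """
    if gpu_count < 1_000:
        return "basic review"
    if gpu_count < 200_000:
        return "pre-approval"
    return "intergovernmental agreement"

print(export_review_tier(500))      # basic review
print(export_review_tier(50_000))   # pre-approval
print(export_review_tier(200_000))  # intergovernmental agreement
```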
Canada's ally status provides political assurance.
It does not provide legal or structural independence.
For any provider trying to build new GPU capacity in Canada — the kind of capacity Canada's AI strategy assumes will exist — these licensing requirements create real delays.
The country that designs the chips decides who gets them.
Right now, that country is the United States.
What I'm Doing About It
I can't fix Canada's compute infrastructure gap. I'm one person.
But I can be honest about what the landscape actually looks like and help founders make the decision with real information instead of marketing.
I'm running my own workloads on DigitalOcean Toronto. $2.99/hr. Canadian jurisdiction.
Bring my own model. For my use case — content production, open-source model experimentation — residency is sufficient.
I don't need full sovereignty. Most startups don't either.
For the businesses I work with that do need it, I point them to the Canadian-owned options and I'm honest about the trade-offs.
The sovereignty spectrum isn't a binary. It has tiers. The right answer depends on what you're building, what data you're handling, and how much accountability you're willing to own.
Most of the content out there pretends it's a simple choice. It's not. And the hardware shortage just made it harder.
The Gap
For the last two years, the AI sovereignty conversation has been about data. Where does it live? Who can see it? Which jurisdiction governs it?
That conversation assumed you could always get the hardware.
That compute was a commodity you could buy whenever you needed it.
That assumption broke this year.
Hardware sovereignty — the ability to procure, deploy, and control AI-capable compute within your own jurisdiction — is now the binding constraint. Not data policy. Not AI regulation.
Whether you can physically get the chips.
No consumer-market jurisdiction has reliable procurement access to high-memory AI hardware right now.
The only actors with guaranteed AI sovereignty are the ones who secured datacenter-scale GPU allocations years in advance.
The countries that did that are the United States and France.
Canada is not on that list.
And a $2 billion strategy built on top of infrastructure that doesn't exist won't change that until the infrastructure actually gets built.
The question isn't whether Canada has an AI strategy.
It's whether Canada has AI infrastructure.
Right now, those are two very different things.

