
GPU Cloud Shopping in Canada: What's Actually Available

George Pu


Availability and pricing in this post reflect what I found as of April 6, 2026. Cloud providers update their infrastructure regularly, and I hope some of this changes soon. If you find something different, I'd genuinely love to hear about it.


I chose Canada.

That might sound like a strange way to start a post about cloud computing, but it matters.

I'm not Canadian by birth. I came here, built my life here, built my company here. This isn't a temporary arrangement for me. I'm committed to this country for the long term.

So when I decided to build AI — real AI, not a wrapper on someone else's API — I wanted to do it here. Canadian company. Canadian data. Canadian compute.

I assumed the infrastructure would be there. Canada has a federal AI strategy. A compute access fund. Sovereignty frameworks.

We host MILA, one of the best AI research labs in the world. The talent is here. The policy ambition is here.

Then I tried to actually rent a GPU.

What I Was Looking For

My company needs GPU compute for AI model training.

Not a side project. Real training runs on current-generation chips — H100s, the same hardware that every serious AI lab in the world is using right now.

The requirements were simple. I wanted GPUs physically located in Canada. Current generation. Available on demand. Clean invoicing.

Shouldn't be hard, right? This is 2026. Canada is supposed to be an AI country.

I spent the better part of two weeks checking every cloud provider I could find with Canadian data centers: Google Cloud, AWS, DigitalOcean, OVHcloud, even a non-profit in Alberta. I called sales teams. Dug through documentation pages. Spun up test instances. Read fine print.

What I found made me angry. Then worried.

Google Cloud Montreal: 2017 Called

Google Cloud has two Canadian regions. Montreal and Toronto.

I started with Montreal, their primary Canadian region. The one you'd expect to have the full range of services.

Here's what's actually available for GPU compute in Montreal:

The NVIDIA P4. Released in 2017.

And the NVIDIA T4. Released in 2018.

That's it.

No H100. No A100. No L4. No V100. Nothing newer than 2018. No TPUs. Not a single piece of current-generation AI training hardware.

I checked this multiple times because I genuinely didn't believe it.

I went to Google's own GPU locations page, filtered for northamerica-northeast1, and stared at the screen.

P4 and T4. In 2026.

For context, a single zone in Iowa — just one of dozens of US zones — has H200s, H100s in multiple configurations, A100s, L4s, and everything in between.

Iowa alone has more GPU variety than all of Canada combined on Google Cloud.

And here's the part that really got under my skin.

If you go to third-party pricing comparison tools — the sites founders use to make infrastructure decisions — they list H100 pricing for Montreal.

The SKU exists in Google's system. The price shows up. It looks like you can rent an H100 in Montreal for $9.80 an hour.

You can't.

The pricing page and the availability page tell two completely different stories. One is marketing. The other is reality.

And if you don't know to check both, you'll plan your entire infrastructure around hardware that doesn't exist where you think it does.
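The cross-check itself is simple to script once you know both pages exist. Here's a minimal sketch of the idea; both sets below are hand-entered placeholders, though in practice you could populate the availability set from the output of `gcloud compute accelerator-types list` filtered to the region, and the priced set from the pricing page:

```python
# Illustrative sketch: flag GPU SKUs that show up on a pricing page but not in
# the region's actual availability list. Both sets are placeholder data; in a
# real check, `available` would come from provider tooling such as
# `gcloud compute accelerator-types list` for the region in question.

priced = {"nvidia-tesla-p4", "nvidia-tesla-t4", "nvidia-h100-80gb"}  # pricing page
available = {"nvidia-tesla-p4", "nvidia-tesla-t4"}                   # availability page

phantom = sorted(priced - available)
for sku in phantom:
    print(f"listed but not deployable in this region: {sku}")
```

Anything in `phantom` is a SKU you can budget for but never actually launch.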

Google Cloud Toronto: Close, But Not Quite

Toronto, Google's second Canadian region, is slightly better. It has L4 GPUs and a limited H100 configuration called "A3 Edge."

But A3 Edge is specifically designed for inference and serving — running already-trained models. Google's own documentation says this.

If you need to actually train a model, which is the compute-intensive part, Toronto doesn't help you either.

So across both Canadian regions, Google Cloud offers exactly zero options for current-generation AI model training.

Zero.

AWS Canada: Better, But You'll Still End Up in the US

AWS's Canadian region is better than Google's. I'll give them that.

But the full range of their latest GPU instances — the P5 machines with H100s, the configurations you need for serious training — is concentrated in US regions. Canada gets partial support.

I couldn't get a definitive answer on exactly which current-generation instances are available in ca-central-1, which is its own kind of problem.

If it's hard to figure out what you can actually rent, something's off.

If you need to train at scale on AWS, you're almost certainly routing through Virginia or Ohio. Your data leaves Canada. Your workloads run on American soil.

For a company trying to build sovereign AI infrastructure, "your best option is to send everything to the US" is not an answer.

OVHcloud: The One That Almost Fooled Us

OVHcloud is a French company with a data center in Beauharnois, Quebec. On paper, a great Canadian option.

Established company. Competitive pricing. H100 GPUs listed on their website.

I was ready to allocate budget.

Then I checked which GPUs are actually in their Canadian data center.

V100 and V100S. Previous-generation hardware from 2017 and 2018.

The H100s on their pricing page? Gravelines, France.

Not Canada. France.

If I hadn't dug into region-specific availability — if I'd just trusted the headline pricing like most busy founders would — we would have committed real money to a provider that cannot deliver what we need in the country we need it in.

This keeps happening. The marketing says one thing. The infrastructure says another. And nobody's talking about the gap.

ISAIC: Hope, But Not Yet

ISAIC is a non-profit in Edmonton, connected to the University of Alberta. They claim H100 access at $2.50 CAD per hour.

I wanted this to work more than any other option on this list.

A Canadian non-profit offering affordable AI compute to Canadian companies?

That's exactly what should exist. That's the kind of infrastructure that would make Canada's AI ambitions real instead of theoretical.

But when I evaluated them like I'd evaluate any provider we're trusting with production workloads, it fell apart.

No public pricing page. No SLA. No uptime guarantees. "Best effort" service — their words. No backups — also their words. Portal-based VM access. Website built on Wix.


I'm not saying this to be harsh. I genuinely hope they grow into something real. But we're planning training runs that cost five figures and take weeks.

"Best effort" and "no backups" aren't things I can bet my company on.

We emailed them asking if they can support production workloads at scale. I'm still hoping the answer is yes.

DigitalOcean: The Part I Still Can't Believe

Here's where the story takes a strange turn.

DigitalOcean — the company most people associate with $5 droplets and student projects — has H100 GPUs in their Toronto data center.

At $2.99 USD per hour.

They also have L40S at $1.57. RTX 6000 Ada at $1.57. RTX 4000 Ada at $0.76 — less than a dollar an hour for a GPU you can prototype on.

All in Toronto. All confirmed and verified.
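To put those rates in context, here's some rough arithmetic for a hypothetical multi-week run. The GPU count and duration are made up for illustration; the hourly rates are DigitalOcean's confirmed Toronto price and Google's phantom Montreal listing from earlier:

```python
# Rough cost comparison for a hypothetical training run:
# 8 GPUs for two weeks, at the hourly rates discussed in this post.
gpus = 8
hours = 14 * 24  # two-week run = 336 hours

rates = {
    "DigitalOcean Toronto H100": 2.99,                           # USD/hr, confirmed
    "GCP Montreal H100 (listed, not actually available)": 9.80,  # USD/hr, phantom SKU
}

for provider, rate in rates.items():
    total = gpus * hours * rate
    print(f"{provider}: ${total:,.2f}")
```

Same run, roughly $8,000 versus roughly $26,000, which is why a phantom price on a comparison site can quietly wreck an infrastructure budget.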

A company best known for beginner-friendly web hosting has better Canadian GPU infrastructure than Google Cloud.

Google Cloud. A trillion-dollar company. With two Canadian regions. And the best they can offer for AI training in this country is a chip from 2017.

Meanwhile, DigitalOcean — headquartered in New York, a fraction of Google's size — put H100s in Toronto. The catch is capacity.

There are reports of their Toronto GPUs selling out. Smaller providers have less inventory, and when it's gone, it's gone.

But at least the hardware exists. At least someone saw the demand.

The Full Picture

Here's what GPU cloud shopping in Canada actually looks like as of the date of this writing:

Google Cloud Montreal:

P4 and T4 only. Hardware from 2017-2018. No H100. No A100. No TPU.

Google Cloud Toronto:

L4 and limited H100 for inference only. Cannot train models.

AWS Canada:

Partial GPU support. Latest-generation training hardware limited and unclear compared to US regions.

OVHcloud Canada:

V100/V100S only. H100s are in France, not Beauharnois.

ISAIC Alberta:

Unverified. Non-profit university sandbox. No SLA, no backups.

DigitalOcean Toronto:

H100 at $2.99/hr, L40S at $1.57/hr, RTX 4000 Ada at $0.76/hr. Capacity uncertain but hardware confirmed.

That's it. That's the entire landscape for a G7 country that calls itself an AI leader.
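The same landscape, restated as data. The boolean is my own criterion from this post (current-generation, training-capable GPUs physically located in Canada), not any provider's wording, and the filter leaves exactly one name:

```python
# The Canadian GPU landscape above, as a checklist. The boolean marks whether
# the provider offers current-generation, training-capable GPUs physically in
# Canada (my criterion, not the providers' marketing).
providers = {
    "Google Cloud Montreal": False,  # P4/T4 only
    "Google Cloud Toronto":  False,  # L4 + inference-only H100 (A3 Edge)
    "AWS ca-central-1":      False,  # latest training instances limited/unclear
    "OVHcloud Beauharnois":  False,  # V100/V100S; the H100s are in France
    "ISAIC Alberta":         False,  # unverified; no SLA, no backups
    "DigitalOcean Toronto":  True,   # H100 confirmed at $2.99/hr
}

viable = [name for name, ok in providers.items() if ok]
print(viable)
```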

Why I'm Not Just Frustrated — I'm Worried

This isn't a niche infrastructure complaint. This is a structural problem.

Every Canadian AI startup faces this same wall. If you can't get current-generation GPUs in Canada, you have two choices.

Send your workloads to the US — meaning your data leaves the country, you lose data residency, and you're competing for capacity against every American tech company.

Or use hardware that's generations behind and accept slower, more expensive training.

Neither option is acceptable for a country that says it wants to compete in AI.

Canada's federal government has an AI Compute Access Fund. It's a good program. I believe in the intent behind it.

But what good is a compute fund when the compute doesn't physically exist in Canada?

We're writing checks for infrastructure that isn't here.

I sit in meetings where people talk about Canadian AI sovereignty. Everyone nods. Everyone agrees it matters.

Then you go home, open your laptop, try to spin up an H100 in Montreal, and find out the best Google can offer you is a chip from the year the iPhone X came out.

The gap between rhetoric and reality is enormous. And it's not closing.

What I Think Is Happening

I don't think Google and AWS are ignoring Canada on purpose. I understand the business logic.

GPU hardware is scarce and expensive. Deploying thousands of H100s to a data center is a multi-million dollar commitment.

The US market is bigger. US demand is insatiable. If you're Google and you have 10,000 H100s to allocate, are you putting them in Iowa where demand is guaranteed, or Montreal where the market is smaller?

From their perspective, the answer is obvious.

But that's exactly the problem. Market logic alone will never give Canada competitive AI infrastructure.

The market is always going to prioritize the US. If Canada wants sovereign AI capability, it can't just wait for Google and Amazon to decide we're worth the investment.

Something has to change. Whether that's government incentives for domestic GPU deployment, Canadian cloud providers stepping up, or entirely new models for compute access — something.

Because right now, the country that's home to Yoshua Bengio and MILA and the Vector Institute can barely offer its own companies a current-generation GPU through any major cloud provider.

Where We Landed

We're building multi-cloud. DigitalOcean Toronto is our primary for Canadian workloads — if capacity holds. GCP and AWS in US regions as backup for training jobs that need scale we can't get domestically.

It's not what I wanted.

I wanted to build AI in Canada, on Canadian infrastructure, with Canadian data residency, supporting the Canadian AI ecosystem.

Instead, I'm relying on a New York company's Toronto data center as my best option and routing my largest jobs through Iowa.

I chose this country. I believe in it. I think Canada has real advantages in AI — the talent, the research community, the policy thinking.

But the hardware has to show up. The infrastructure has to be real, not marketing. The compute has to actually exist in the data centers, not just in the pricing pages.

Until it does, every Canadian AI founder is making the same compromise I am. And most of them don't even know it, because they trusted the pricing page.

Prices and availability verified as of April 6, 2026, using official provider documentation and direct testing. Cloud infrastructure changes regularly — I encourage you to verify current availability before making decisions. If any of this has changed since publication, I'd love to hear about it.