Welcome to Founder Reality
Here's what's new

Fine-tuning your own AI doesn't cost $35,000. It cost us about $50.
Two A100 GPUs. Spinning quietly in a Google datacenter. Five hours of training. About $50 in compute. That's what it cost us to fine-tune our own 4-billion-parameter AI model this week. The base model went from 30% accuracy on the tasks we care about to 98%. Read any article on fine-tuning costs and you'll see numbers between $5,000 and $35,000. One blog called it a "CFO conversation." Another listed "hidden expenses" that could double your initial estimate. A third quoted team
Read story
Founder Reality is written by George Pu — $10M+ portfolio built by 27, no investors, no co-founders.
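The headline math is simple enough to sketch. One caveat: the story gives the GPU count, hours, and total, but not the hourly rate — the ~$5/GPU-hour figure below is an assumption inferred from those totals, not a number from the post, and actual A100 pricing varies by provider.

```python
# Back-of-envelope fine-tuning cost from the numbers in the story above:
# 2 GPUs, 5 hours of wall-clock training, for a ~$50 total.
def finetune_cost(num_gpus: int, hours: float, rate_per_gpu_hour: float) -> float:
    """Total compute cost = GPUs x wall-clock hours x hourly rate per GPU."""
    return num_gpus * hours * rate_per_gpu_hour

# rate_per_gpu_hour=5.0 is an assumed rate, back-solved from the $50 total.
cost = finetune_cost(num_gpus=2, hours=5, rate_per_gpu_hour=5.0)
print(f"${cost:.0f}")  # two A100s x five hours x $5/GPU-hr -> $50
```

Even if the assumed rate is off by 2x in either direction, the total stays two orders of magnitude below the $5,000–$35,000 figures the story pushes back on.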
Latest Essays
What I'm thinking about right now.

Three Kinds of Cloud (and Why Two of Them Keep Getting Confused)
I sat down with a Canadian university last week. They were trying to articulate to industry partners what their compute offering would be. They knew "sovereign" was the right word. They couldn't define it for a buyer. They couldn't tell me what a partner would actually use it for that they couldn't already do on AWS in Montreal. That's not the university's failure. The industry calls three different things "cloud" and lets two
Read essay
GPU Cloud Shopping in Canada: Three Weeks Later
Three weeks ago I wrote a post called GPU Cloud Shopping in Canada: What's Actually Available. The short version: I checked every major cloud provider with a Canadian data center, trying to rent a current-generation GPU to train AI models in this country. Google Cloud Montreal had chips from 2017. AWS listed the right hardware but wouldn't let me actually run it. OVHcloud's H100s turned out to be in France, not Quebec. DigitalOc
Read essay
What fine-tuning actually costs (it's not what you think)
Training an AI model is assumed to cost millions of dollars. It's the single most common misconception in the space, and it's wrong by roughly two orders of magnitude for the activity most people actually want to do. This post is a short, concrete breakdown of what fine-tuning actually costs in 2026, what it doesn't cost, and where the real spend lives. I'm writing it now because "how much does this cost" is the first question
Read essay
From the series · The AI Displacement Series
The Two Responses
This is Chapter 2 of 7 in the AI Displacement Series.
More on Policy & Economy
Three essays from the archive on a different angle.
Three Kinds of Cloud (and Why Two of Them Keep Getting Confused)
Read essay
Hardware Sovereignty Is the New Data Sovereignty
After I wrote about trying to buy a Mac Studio and failing, the replies kept circling the same question. "If I can't buy the hardware and I don't trust the cloud, what am I supposed to do?" That question led me somewhere I didn't expect.
I Checked What GPUs You Can Actually Get in Canada
Not the marketing pages. Not the pricing calculators either. The actual hardware you can spin up today in a Canadian data center. I went t
Read essay
One Company Summoned Two Central Banks
I was at my desk Tuesday when the Bloomberg alert came through. Bessent and Powell — the Treasury Secretary and the Fed Chair — had called an emergency meeting with every major bank CEO in America. Not about interest rates. Not about the war. Not about a bank run. About a single AI model. Built by a single company.
"Yeah, Sovereignty, Sure"
I run a project called Sovereign Cloud. The whole thesis is that governments and bus
Read essay
Straight from the inbox
The weekly newsletter — long-form, no fluff.
Latest Videos
Real talk. No script.
You might not have read this
A couple of older essays we think are worth a second look.

GPU Cloud Shopping in Canada: Three Weeks Later
Read this essay
What fine-tuning actually costs (it's not what you think)
Read this essay
Run the numbers yourself
Free calculators and assessments. No email wall.
Recent threads
The latest from @TheGeorgePu.
Canada's AI hardware reality check — what's actually available vs. what founders think they can buy.
GPU shipping is the tell. If you can't physically own the compute, you don't own your AI stack.
I only write code when it's 10/10 important. Slowing down is the real productivity move in 2026.
Mac Studio supply is crunched. Apple's quietly rationing M3 Ultra — AI builders feel it first.
The Newsletter
Real numbers. Expensive lessons. No performance.
Join 5,000+ people who'd rather own than rent.