
Welcome to Founder Reality

Here's what's new

George's Takes

Your ChatGPT and Claude Conversations Are Court Evidence

Greg Brockman's journal became Exhibit 161 this week. The next chapter writes itself. Someone's ChatGPT history becomes Exhibit 162. That sentence sounds like speculation. It isn't. The infrastructure is already in place. The court orders are already in place. The only thing missing is a famous enough defendant for the headline to break the way Brockman's did. The court order most people haven't read: in May 2025, Magistr…


Founder Reality is written by George Pu — $10M+ portfolio built by 27, no investors, no co-founders.

Fresh Off the Press

Latest Essays

What I'm thinking about right now.

Own Your Tech

GPU Cloud Shopping in Canada: Three Weeks Later

Three weeks ago I wrote a post called GPU Cloud Shopping in Canada: What's Actually Available. The short version: I checked every major cloud provider with a Canadian data center, trying to rent a current-generation GPU to train AI models in this country. Google Cloud Montreal had chips from 2017. AWS listed the right hardware but wouldn't let me actually run it. OVHcloud's H100s turned out to be in France, not Quebec. DigitalOc…

Own Your Tech

What fine-tuning actually costs (it's not what you think)

Training an AI model is assumed to cost millions of dollars. It's the single most common misconception in the space, and it's wrong by roughly two orders of magnitude for the activity most people actually want to do. This post is a short, concrete breakdown of what fine-tuning actually costs in 2026, what it doesn't cost, and where the real spend lives. I'm writing it now because 'how much does this cost' is the first question…

Own Your Tech

Why I chose Unsloth (before training a single token)

Honest note up front: I have not yet fine-tuned anything with Unsloth. I have not run a single training job. What I did was spend three weeks researching fine-tuning frameworks before writing a line of training code — and at the end of that research, I picked Unsloth and committed to it. This post is about why. I'm writing it now, before I start, for two reasons. First, so that if this decision ages badly I have to own it public…


From the series · The AI Displacement Series

The Two Responses

This is Chapter 2 of 7 in the AI Displacement Series.

Deeper Dive

More on Policy & Economy

Three essays from the archive, each taking a different angle.

Watch

Latest Videos

Real talk. No script.


The Newsletter

Real numbers. Expensive lessons. No performance.

Join 5,000+ people who'd rather own than rent.