Yegge's OSS maintainer post hits different when you've actually had to review AI-generated PRs. The code looks right until you trace through the edge cases — then you realize the contributor never understood the problem they were solving. Classic case of AI making it easier to submit garbage at scale.
Vercel shipped two pieces today that feel like the real deal. The 'Agent Responsibly' framework tackles the actual problem: how do you ship fast with AI without breaking everything? And the Turborepo optimization is what 10x looks like — 96% faster builds on massive monorepos. That's infrastructure that compounds.
Meanwhile, the cosmic endowment calculation feels like intellectual masturbation. We can't figure out local model deployment (Gerganov's reality check is brutal), but we're calculating how many stars a superintelligence could reach? The gap between 'my inference setup crashes randomly' and 'von Neumann probes at 99% of light speed' is absurd.