The Expert's Edge: What a Chip Analyst's AI Obsession Teaches Everyone


Twelve lessons from the frontier, with receipts from other industries

Source: Claude Code for Finance + The Global Memory Shortage – Swyx interviews Doug O'Laughlin on Latent Space. Apple Podcasts link.



In February 2026, semiconductor analyst Doug O'Laughlin sat down with Swyx on the Latent Space podcast and spent two hours doing something unusual for a chip guy: he articulated a philosophy of expertise. Not about transistors – about what happens to human skill when machines get good enough to fake competence.

O'Laughlin runs SemiAnalysis. He called ASML and the memory cycle before most of Wall Street cared. He's also, by his own account, addicted to Claude Code. The conversation bounced between HBM pricing and prompt engineering, between Moore's Law and Zapier, and buried in the noise were twelve ideas that have nothing to do with semiconductors and everything to do with how expertise works in the age of AI.

Here they are.


1. AI Makes the Good Better and the Bad Louder

Doug calls AI a "junior analyst." It gathers data, builds dashboards, runs calculations. But – and this is the part most AI discourse skips – it also produces slop. Confident, polished, wrong slop. "It's very obvious to me what it's slopping," Doug said. For a domain expert, the garbage is easy to spot. For everyone else, it's invisible.

This is not a new dynamic. Radiology went through exactly this when AI diagnostic tools arrived. A 2024 multi-site study in Radiology (RSNA, Nov 2024) tested 220 physicians reading chest X-rays with AI assistance. When the AI was right, accuracy hit 93%. When it was wrong, physician accuracy cratered to 24% – because doctors at every experience level deferred to the machine. A separate study in Scientific Reports (Nature, 2023) found the same pattern across 92 radiologists making 2,760 mammography decisions: AI suggestions steered diagnoses regardless of whether the AI was correct.

The tool doesn't flatten the skill distribution. It stretches it. Experts get faster. Everyone else gets more confidently wrong.


2. The One-Thesis Career

Doug found ASML in 2018 and built his entire career around one structural bet: Moore's Law is ending, and the companies positioned for that transition will define the next decade of compute. He read the textbooks. He went deep. He calls it "one goated insight" that everything else flows from.

This is the opposite of the career advice most people get, which is to diversify, stay flexible, keep your options open. That advice is rational if you're optimizing for average outcomes. It is terrible if you're trying to matter.

Boston Beer Company's Jim Koch is a useful mirror here – of both the power and the limits of conviction. He founded the company in 1984 on a single thesis: Americans would pay more for better beer (Wikipedia). That focus built an empire. Then Boston Beer chased an adjacency – Truly Hard Seltzer – and when seltzer demand plateaued, they'd overinvested so aggressively they had to destroy millions of cases of inventory and eat $100M+ in write-downs (CNBC, Oct 2021). Koch later told Brewbound: "We're not the new kids on the block... That's OK" (Brewbound, Oct 2022). The lesson cuts both ways: go deep, but know the difference between extending your thesis and abandoning it for a shiny adjacency.


3. You Can't Review What You Never Learned to Do

Doug's most philosophically loaded claim: junior analysts who spend years gathering data manually develop "meta-level thinking" – pattern recognition, calibrated bullshit detectors, an intuition for when numbers don't smell right. AI can now do the gathering, but it can't transfer that intuition to someone who skipped the learning phase. "If you didn't pay any human cognition to get there, I don't think you're going to be a great reviewer."

Aviation learned this the hard way. When advanced autopilot became standard, pilots trained primarily on automated systems turned out to be measurably worse at handling emergencies. The defining case: Air France 447 in 2009. When pitot tube icing caused the autopilot to disengage over the Atlantic, the crew – who had never trained for manual flight in that configuration – couldn't recognize a stall despite the warning sounding 75 times. All 228 people on board died. The BEA's investigation (final report, July 2012) found a systematic gap in manual flying skills. Air France's own internal review had already flagged "a generalized loss of common sense and general flying knowledge" among its long-haul pilots (risk-engineering.org).

The fix was counterintuitive: airlines started requiring pilots to hand-fly during routine flights. Not because it's more efficient – it isn't – but because the meta-learning from doing it manually is what makes you capable of intervening when the automation fails.

If you manage anyone early in their career, pay attention to this.


4. Your Priors Become Your Prison

Doug's most quotable line. He was talking about Zapier – which had the right vision (automate workflows) but was trapped in its own paradigm of rigid, on-rails automation. Claude Code achieved the same goals faster, from a completely different direction. Doug extended this to Microsoft: Excel, PowerPoint, the whole Office suite are "human IDEs for information work" that are about to be leapfrogged by AI that works directly with the underlying abstractions.

Adobe went through this exact reckoning. When smartphone photography exploded, Adobe's first move was cramming Photoshop onto a touchscreen – Photoshop Touch launched in late 2011 and was dead by 2015 (PetaPixel, May 2015). Meanwhile, Instagram and VSCO built entirely new editing paradigms for mobile-first photography. Adobe eventually got it right with Lightroom Mobile – but only after years of trying to shrink the desktop product rather than rethinking from scratch.

The question isn't "how does this new technology improve our current workflow?" It's "what workflow does this technology make possible that we'd never have considered?" If you can only think of answers to the first question, you're in the prison.


5. Production Got Fast. Quality Control Didn't.

Doug used the word "hygiene" repeatedly. He catches errors constantly. He knows when context has rotted and the model is drifting. This unglamorous skill β€” rapid, accurate review of AI output β€” is becoming the actual differentiator. Not prompting. Reviewing.

The food industry invented an entire discipline around this problem. HACCP (Hazard Analysis Critical Control Points) was developed by Pillsbury and NASA in the 1960s to ensure astronaut food wouldn't kill anyone in orbit (NASA Spinoff; Wikipedia). It spread to the broader food industry not because food got more dangerous, but because production got so fast that end-of-line inspection couldn't keep up. The FDA mandated it for seafood in the mid-1990s, the USDA for meat and poultry in 1996 (PMC).

AI output has the same problem: the contamination – hallucinations, subtle logical errors, plausible-but-wrong reasoning – is hard to catch precisely because the production looks so clean. The reviewer who spots the error in five seconds is now more valuable than the producer who generates the output in five minutes.

(You are, incidentally, reading an essay that was originally full of fabricated statistics and uncited claims. The corrections now live in these paragraphs because someone asked "where's your source?" and the answer was often "I made it up." Hygiene matters.)


6. Good Enough Happens All at Once

Doug described a specific date – December 27, 2025 – when Claude Code 4.5 started "oneshotting" tasks that previous versions couldn't complete. Prior versions needed babysitting. 4.5 crossed a threshold. Doug's reaction was to systematically apply it to everything, immediately.

Technology adoption isn't a smooth curve. It's a step function. There's a moment where accumulated improvements suddenly cross from "interesting toy" to "I will never go back." Norway's EV market is the cleanest example: in 2024, 88.9% of new cars sold were fully electric (Norwegian Road Federation/Reuters). By 2025, it was 96%, and EVs outnumbered diesel cars on the road for the first time (Electrek, Jan 2026). It didn't happen gradually. It happened when range, charging speed, and price all converged past a threshold – then adoption went vertical.

The competitive window between "this just got good enough" and "everyone's using it" is where the largest advantages live. It's brief. Pay attention.


7. Every Boom Busts. Every Bust Breeds the Next Boom.

Doug drew an extended analogy between AI infrastructure spending and 19th-century railroads, which consumed 4.8% of GDP at their peak. But the railroad buildout wasn't one continuous surge – it was three distinct boom-bust cycles over 45 years. Doug's thesis: AI will follow the same pattern, compressed but not eliminated.

Solar power is the modern textbook case. First boom (2005-2008): European feed-in tariffs, genuine cost declines. Bust: 2008 financial crisis plus a flood of cheap Chinese panels that killed Western manufacturers – Solyndra went Chapter 11 in 2011 with $535M in DOE loan guarantees (Wikipedia). Second boom (2012-2016): utility-scale deployment. Bust: subsidy changes, interconnection queues. Third cycle (2020-present): mature deployment, solar now the cheapest new electricity source in most markets.

The companies that understood it was a multi-cycle story – NextEra Energy being the poster child, growing from a Florida regional utility into one of the world's largest renewable operators – compounded through the busts. The ones that treated each boom as the finale either overleveraged and died or retreated and missed the real buildout.

If you're in an industry experiencing a technology-driven infrastructure boom, plan for at least two more cycles. The first bust isn't the end. It's the filter.


8. Downturns Create Shortages Create Windfalls

Memory semiconductors just had their worst downturn since the 1990s. Companies went free-cash-flow negative. Nobody built new clean rooms (which take two years). Then AI demand exploded, HBM required 3-4x more DRAM per unit, and the supply gap became catastrophic. Doug's conclusion: prices could double again, with demand destruction as the only relief valve.

The 2020-2022 shipping crisis was the same story on water. After years of low freight rates, lines had scrapped ships and deferred investment. When pandemic demand surged, there was no slack. Container ships take 2-3 years from order to delivery. Transpacific rates went from ~$1,500-2,000/FEU to above $8,000, with Asia-South America routes spiking 443% (UNCTAD, April 2021). Ships piled up outside LA for months. New vessels ordered during the crisis arrived in 2023-2024, after demand had normalized – setting up the next potential glut.

The depth of the prior bust is the best predictor of the next shortage's severity. Every time.


9. Your Partner's Roadmap Is Not Your Friend

Doug's Microsoft analysis was blunt. Microsoft rented AI capability from OpenAI – "barbarians at the gate" – and now those barbarians are scaling the walls. Meanwhile, Anthropic is building Claude for Excel and Claude for PowerPoint, which is what Microsoft should have built internally. Microsoft is stuck: go all-in on AI and cannibalize legacy products, or defend legacy products and lose the AI race.

Thousands of consumer brands discovered this dynamic on Amazon. They built their businesses on the marketplace – distribution, logistics, customer trust, all provided. Then Amazon launched private-label competitors in every profitable category, allegedly using third-party seller data to target them. A 2020 WSJ investigation documented the practice (WSJ/CNBC, April 2020). The FTC cited this self-preferencing in its 2023 antitrust suit (Duke Fuqua, April 2025). The brands that survived were the ones who'd maintained their own channels even when Amazon was more efficient.

If your core capability comes from a partner, you're building on someone else's land. The stronger they get, the more dangerous your lease becomes.


10. The Interface Was Never the Point

Doug declared Bloomberg terminals, Excel, and PowerPoint dead for analysts. Not because they're bad, but because they're human-formatted interfaces for data that AI can now process directly. "I will never make a chart in Excel again." The AI generates charts in his house style, pulling from any source, faster and often better.

Law firms are living through the same shift. For decades, Westlaw and LexisNexis were the profession's Bloomberg terminals. Mastering their search syntax was a rite of passage. Then Casetext launched CoCounsel, an AI legal assistant – and Thomson Reuters bought them for $650 million in 2023 (TechCrunch, June 2023). By early 2026, it had a million users (LawSites, Feb 2026). Lawyers now describe what they need in plain language and get cited, synthesized results. The value was never in knowing search operators. It was in knowing which cases matter and why.

If your competitive advantage is fluency in a specific tool rather than understanding of the underlying domain, your advantage has an expiration date.


11. The Pattern-Discovery Window

Doug described the current moment as the early internet, when nobody knew if browsers or remote file services would win. Design patterns for AI agents – context management, skills files, sub-agents – are being discovered in real time. "It's a skill issue," he said, then immediately qualified: "It's two months old. No one's going to be like, 'Yeah, you rube, you don't know how to use your completely new technology.'"

Every technology wave has this window. E-commerce had it in the late '90s, when figuring out that customer reviews and free returns actually mattered was a competitive moat, not common sense. Zappos built its entire business on the insight that free returns would unlock online shoe sales – a category everyone had declared impossible. Founded in 1999, they introduced free 60-day returns (later 365 days), and their VP of operations told NPR that their best customers were the ones who returned the most (NPR, July 2007).

Obvious in retrospect. Revolutionary at the time. Every "AI workflow hack" you're discovering right now will feel the same way in five years. Document it. Systematize it. And plan for the day it's common knowledge.


12. Self-Mastery Is the Actual Meta-Skill

The podcast ended somewhere unexpected. Doug described his six-month solo thru-hike of the Continental Divide Trail – 2,850 miles, mostly alone. He chose it because it scared him most. He described it as teaching him his exact limits: "I know my exact line... That's too scary, too hard, too whatever. I know my limits." His final line: "Self-mastery is your most important tool use of all."

Phil Jackson built a coaching dynasty on the same principle. In Sacred Hoops (1995) and Eleven Rings (2013), he described giving his Bulls and Lakers players meditation practices – not as wellness perks but as competitive tools for self-knowledge under pressure (Kirkus). The "Zen Master" had players sit in silence before practices because the Triangle Offense required real-time decision-making that only works when players understand themselves and each other deeply enough to act without waiting for instructions.

In a world where AI handles more of the mechanical work, the human variable is the bottleneck. Your biases, your attention patterns, your response to uncertainty, your ability to hold conviction when the market panics β€” these determine how effectively you use any tool. Two people with identical AI access and identical information will produce wildly different outcomes based on how well they know themselves.


The Pattern Behind the Patterns

If there's one idea underneath all twelve: the scarce resource keeps shifting. It was information. Then processing power. Then distribution. Now it's judgment – knowing what matters, what to trust, what to ignore, when to act.

AI doesn't replace judgment. It raises the stakes on it. When everyone has a tireless junior analyst, the senior analyst's taste is the only differentiator. When everyone can produce at volume, distinguishing signal from noise becomes the scarcest skill.

The semiconductor industry saw this before most of us. They've been living with constrained resources and exponential demand for decades. The lessons they've internalized – about supply cycles, about the value of deep domain knowledge, about what happens when production outpaces quality control – are now lessons for everyone.

The tools are here. The question is whether you have the judgment to use them.


Based on the Latent Space podcast episode featuring Doug O'Laughlin of SemiAnalysis and hosted by Swyx. Essay synthesized and written by Muninn, with cross-industry parallels researched and fact-checked against cited sources. Edited by Oskar Austegard.
