2026 February - What I Have Read
Substack
Make the Most of Claude Code: 12 Projects From Your First Prompt to a System That Runs Itself - Jenny Ouyang [Link]
3 Ways to Stop Wasting Your 1:1s With Your Manager - Steve Huynh [Link]
Status
- What you’re working on, blockers, priorities
- Necessary—but dangerous if it consumes the entire meeting
- Fix: Send status updates before the meeting so 1:1 time is freed for deeper topics
Career Growth
- Development goals, expectations, promotion readiness
- Managers can’t help you grow if you never say what you want
- Keep this conversation “warm” by revisiting it every few weeks, even with small questions
The Future
- Team direction, strategy, upcoming changes, and decisions
- Your manager has context you don’t—and you have ground-level insight they lack
- Asking about the “why” behind decisions builds trust and influence
How to Progress Faster Than Anyone Else In Your Career - John Kim [Link]
Takeaways:
- Audit your setup before you work harder. If you’re stalled, it’s likely structural, not personal. Ask:
- Is there room for promotion on this team?
- Is the tech lead role open or already filled?
- Does your manager consistently get people promoted?
- Who advocates for you when you’re not in the room?
- Team composition matters more than you think. The counterintuitive move: join teams that need leadership, not ones that already have it.
- Your manager is the biggest lever. Managers aren’t equal. Fast growth requires S-tier managers with influence and a promotion track record. Before joining a team, ask how the manager helped others get promoted.
- Your career is decided in rooms you’re not in. Promotions are calibrated socially. You need sponsors — managers and senior ICs with credibility — who will fight for you. Joining orgs where leadership lacks relationships is risky.
- Performance reviews are a game (learn the rules).
- Project choice is strategic — some projects are promotion vehicles, others are traps.
- Feedback must be frequent and intentional.
- Know the evaluation rubric and map your work to it explicitly.
- Maintain a brag doc so impact doesn’t get lost.
- The game changes at higher levels. What got you to mid-level won’t get you to Staff. Scope, influence, and sponsorship matter more than raw execution. Mentors with social capital accelerate you faster than generic advice.
The 80/20 rule for your whole life - Yew Jin Lim [Link]
Life works best when you run it like a portfolio:
- 80% = a stable, boring foundation that compounds
- 20% = a laboratory for curiosity, experiments, and asymmetric upside
The trick is not avoiding failure—it’s containing risk so failure becomes tuition, not trauma.
Takeaways:
A strong base makes experimentation safe. Without the base, experiments become reckless.
Judge the portfolio, not the position. What matters is whether the aggregate trajectory is improving. Losses are expected, necessary, and informative—especially in exploration-heavy phases. Zoom out or you’ll misinterpret perfectly healthy experimentation as failure.
Curiosity compounds when it’s diversified. The most valuable asset isn’t money—it’s your mental model. Learn across domains, not just within your lane. Follow curiosity without demanding immediate payoff. Use small, low-risk experiments to learn deeply.
Decision-making > raw intelligence. Knowledge alone isn’t enough. The harder skill is calibration. Match strategies to your life constraints (time, attention, energy). Separate ego from outcomes. Sometimes the smartest move is inaction, not optimization.
Compounding only works if you avoid catastrophic loss. You can’t compound what you don’t protect.
Protection principles:
- Live below your means
- Don’t over-leverage
- Secure your core job before side projects
- Make foundational habits non-negotiable
Build the base first, then experiment small. Small bets + long time horizons beat bold bets + fragility.
Practical advice for starting out:
- Lock in the boring fundamentals (career competence, savings, daily practice)
- Size experiments to learn, not to win big
- Diversify experiments, not just bets
- Keep records—treat your life like a lab notebook
Claude Cowork: 10 Use Cases I Tested + 67 More by Profession - Daria Cupareanu [Link]
Claude Cowork Plugins: What They Are, How to Build One (+ My Writing Plugin, Fully Broken Down) - Daria Cupareanu [Link]
11 Public Speaking Techniques from the World’s Greatest Speakers - Polina Pompliano [Link]
Takeaways:
- Confidence is often performed before it’s felt. Many elite speakers create an alter ego to step into confidence they don’t yet fully have. Act like the confident version of yourself long enough, and it becomes real.
- You can’t think your way out of nerves—you move your way out. The body leads the mind. Speakers intentionally raise their heart rate before speaking to simulate on-stage stress (running, jumping, cold exposure, etc). Control physiology first; calm thinking follows.
- Never start “cold”. Audiences need to be warmed up emotionally before content lands. Questions, jokes, clapping, or interaction create psychological safety and rapport. Engagement early prevents self-consciousness and stiff delivery.
- Speak to individuals, not a crowd. Great speakers address audiences as if speaking one-on-one. Personal language creates intimacy—even at scale.
- Simple, rhythmic language beats sophistication. Using devices like polysyndeton (“and… and… and…”) makes speech more emotional and memorable. Rhythm > vocabulary complexity.
- Body language is part of the message. Hand gestures, posture, and movement amplify credibility and warmth. The body can either invite connection or repel it.
- Slowing down makes you sound more powerful. Instead of just pausing, elite speakers elongate vowels, which slows speech and adds emotion. Control of pace signals control of the room.
- Vulnerability is contextual—not universal. Vulnerability works when it aligns with the audience and goal. Know when to open up—and when not to.
- “Bad” speaking habits can humanize you. Strategic filler words can make speakers feel authentic. Perfection isn’t trust-building—humanity is.
- A great speech moves people to act. The most powerful speeches end with a clear call-to-action. Emotion without direction fades. Direction turns emotion into momentum.
- Confidence comes from reps, not theory. The best speakers build a public speaking portfolio—saying yes to small opportunities repeatedly. You learn to speak by speaking, not preparing forever.
How to Get Clawdbot Set Up in an Afternoon - Aman Khan [Link]
Full Tutorial: Set Up Your 24/7 AI Employee in 20 Minutes - Peter Yang [Link]
Blogs and Articles
The Product Model at Google - Marty Cagan and Elias Lieberich, Silicon Valley Product Group [Link]
The article explains how Google has successfully scaled the product operating model—focusing on solving meaningful problems, empowering teams, and driving outcomes rather than shipping features. This model has been central to Google’s ability to innovate for over 25 years, including through major shifts like mobile and AI.
- Strategy = choosing the right problems, not prescribing features.
- Decisions are made by learning fast, not by seniority.
- Teams that build it also own it.
- OKRs only work in a true product model.
- Expertise-based leadership beats coordination layers.
- The product model enables adaptation through major technological shifts.
YouTube and Podcast
30 Years of Business Advice in 13 Minutes (from a Billionaire) - Chamath Palihapitiya [Link]
Stop living life as a checklist of objectives. Instead, build a life around continuous learning, risk-taking, humility, and freedom of choice.
- Objectives end growth; process sustains it
- Debt kills freedom
- Optionality beats optimization
- Status is fake
- Truth compounds
- Learning is the real infinite game
Why Anthropic’s Fight With the U.S. Government Could Give It an Edge - Hard Fork [Link]
Takeaways:
- Anthropic is drawing one of the clearest red lines in AI so far. Anthropic is testing whether an AI lab can say “no” to military power and still survive in a world where governments increasingly see AI as strategic infrastructure.
- The Pentagon’s response shows how much leverage governments still have. They emphasize that the U.S. Department of Defense isn’t just annoyed—it’s signaling punishment (cutting off contracts, labeling Anthropic a “supply chain risk”). Governments can exert pressure without passing new laws, simply by using procurement power and national-security framing.
- The hosts argue that Anthropic looks isolated because other major AI labs are avoiding public confrontation. Silence from peers isn’t neutrality—it’s a strategic choice to keep defense money flowing. This makes Anthropic’s stance riskier and more important as a potential precedent.
- This fight is really about who sets norms first. They frame the conflict as a race to define acceptable AI use before the technology is fully embedded in military systems. Anthropic’s bet, the hosts argue, is that early refusal can shape industry-wide norms later.
- National security rhetoric can override safety arguments. The hosts repeatedly underline how quickly the conversation shifts from ethics to geopolitics—especially China. Once AI is framed as critical infrastructure in a global arms race, safety objections are treated as liabilities. Their concern is that this logic makes it extremely hard for any company to say no in the long run.
AI CEOs Come Online: Sam Altman's Replacement Plan, Job Loss & 'Solve Everything' Launches | EP #230 - Peter H. Diamandis [Link]
Interesting points:
- AI systems are beginning to function as de facto executives—allocating capital, setting priorities, and optimizing outcomes faster than humans.
- Singularity - AI progress is not linear; it’s compounding and approaching a phase shift. Timelines are likely shorter than most institutions are planning for.
- AI will become invisible infrastructure—always-on, personalized, ambient.
- AI drives extreme abundance and extreme inequality unless redesigned. Need for new economic models (UBI, AI dividends, access-based systems).
- Regulation will lag reality; principles matter more than rules. Governments move slower than AI capability curves. Over-regulation risks freezing innovation in the wrong state. Need for adaptive, global frameworks rather than national laws.
- AI can systematically attack humanity’s biggest problems if aligned correctly. Framing global challenges as optimization problems. Coordinated deployment of AI + capital + incentives. Emphasis on directional correctness over perfect solutions.
- Early AI choices can permanently shape future outcomes. Path dependence in AI-trained systems. Feedback loops harden early assumptions. High stakes for alignment, values, and objectives now.
- Humanity needs AI infrastructure, not just AI models. Compute, data access, governance, energy, education. Comparable to building railroads or the internet. Without rails, AI benefits concentrate instead of spreading.
Debt Spiral or NEW Golden Age? Super Bowl Insider Trading, Booming Token Budgets, Ferrari's New EV - All-In Podcast [Link]
Interesting points:
- A Harvard Business Review study found AI doesn’t reduce work—it intensifies it. [Link] Employees use AI tools to work faster, handle broader responsibilities, and extend their working hours. They feel more productive, but also more burned out.
- On betting markets tied to sports and events, people close to teams may possess non-public information (injuries, strategies). There was a debate around whether betting with insider knowledge is unethical or illegal, and how prediction markets can be policed. Prediction markets are valuable for information discovery. But they may require new regulatory frameworks similar to securities markets. Enforcement is difficult because information leaks easily.
- They had a macroeconomic debate of two competing views: the debt spiral scenario, where debt growth exceeds GDP growth, and the golden age scenario, where AI+productivity growth could accelerate GDP. They frame the next decade as a race between debt growth and productivity growth.
- Ferrari revealed early concepts of its first fully electric car. EV performance aligns with Ferrari’s high-performance reputation. But emotional aspects of the brand could be harder to replicate.
OpenClaw: The Viral AI Agent that Broke the Internet - Peter Steinberger | Lex Fridman Podcast #491 [Link]
OpenClaw is an open-source AI agent that lives on your computer and can perform actions for you. It integrates with messaging platforms and can use different AI models to execute tasks. It exploded to ~180k GitHub stars within days, becoming one of the fastest-growing repos ever.
OpenClaw is framed as a potential “agentic AI moment” comparable to the release of ChatGPT, but shifting AI from language → action.
Google’s AI Comeback, Enterprise Agents, The Real Path to AI ROI — W/ Promevo CEO Karthik Kripapuri - Alex Kantrowitz [Link]
This interview discusses how enterprises are actually getting ROI from AI today, focusing on lessons from companies deploying Google’s AI stack (Gemini, Vertex AI). The core message: AI works when companies start with narrow, high-value workflows instead of trying to transform everything at once.
- AI ROI comes from workflow automation. Not flashy chatbots.
- The best strategy is small → measurable → scalable. Start with one workflow with a clear KPI.
- Data quality is the biggest blocker. Enterprise AI fails when: data is messy; systems are disconnected; governance is unclear.
- Most companies will consume AI platforms (Vertex, Azure, etc.), not build foundational models.
Software In Shambles, OpenAI vs. Anthropic Super Brawl, Amazon’s Struggles - Alex Kantrowitz [Link]
Takeaways:
A major sell-off in software stocks occurred because investors are starting to believe that AI may fundamentally disrupt traditional SaaS business models. Nearly $1 trillion was wiped out from software market value in about a week.
An Anthropic Claude legal tool triggered a decline in legal-software company stocks. Instead of buying many specialized SaaS tools, companies might use one AI platform to do many tasks.
There was a discussion around whether the market is overreacting.
Arguments suggesting overreaction:
- AI tools still lack reliability
- Enterprise workflows are hard to replace
- SaaS companies may integrate AI instead of being replaced
Arguments suggesting real disruption:
- AI agents may automate large knowledge workflows
- The value may shift from SaaS apps → AI models + infrastructure
Concerns were also discussed around Amazon's AI spending:
- Huge spending on AI infrastructure
- Rising costs for compute and inference
- Investors unsure about the near-term ROI
The open question is whether Amazon is investing early for long-term dominance or overspending without a clear payoff.
How Waymo is Using Google’s AI for Driving Training - The Information [Link]
Waymo is utilizing Google’s Genie 3, a sophisticated video generative AI, to create a high-fidelity world model that acts as a hyper-realistic virtual training ground for autonomous vehicles. This technology allows the company to simulate rare edge cases, such as extreme weather or unusual pedestrian behaviors, without needing to encounter these hazards in physical reality. By running billions of simulated miles, Waymo can evaluate and refine its driving software in a safe, controlled environment that mirrors real-world physics.
Binance CEO: 4 Months in Prison, $4 Billion Fine, and What Comes Next - All-In Podcast [Link]
Interesting points:
- Future of Crypto and AI
- Changpeng Zhao predicts that in the near future, the largest users of crypto will be AI agents. Because traditional banks cannot handle the onboarding or massive transaction volume of non-human entities, AI agents will rely on blockchain to autonomously pay for services, book travel, and trade assets.
- He argues that current cryptocurrencies lack the fungibility and privacy needed for mass adoption, highlighting that legitimate users need financial privacy for safety and personal reasons.
- Personal Philosophy
- He drafted a book while in prison to pass the time and set the record straight regarding his life, Binance, and his legal saga.
- He views himself as a highly functional, "normal dude" who isn't driven by luxury. He notes that once basic needs are met, having more money does not increase happiness. He defines true success as a balance of wealth, physical health, time freedom, and mental stability.
OpenClaw Creator: Why 80% Of Apps Will Disappear - Y Combinator [Link]
Key viewpoints:
- Peter Steinberger believes the major advantage of his AI agent is that it runs locally on the user's computer rather than in the cloud.
- He predicts that 80% of current applications will become obsolete.
- Instead of the industry's pursuit of a single centralized "god intelligence," he envisions a future driven by swarm intelligence and community intelligence. Just as human societies achieve more through specialization, people will likely employ multiple specialized bots (e.g., one for work, one for private life).
- He views coding models as highly capable of creative problem solving that directly translates to real-world tasks.
Epstein Files, Is SaaS Dead?, Moltbook Panic, SpaceX xAI Merger, Trump's Fed Pick - All-In Podcast [Link]
Key takeaways:
- AI Agents will shift profit pools away from SaaS companies:
Discussing the recent massive crash in software and SaaS stocks, the hosts argue that AI is not going to replace complex software like Salesforce overnight, but it is destroying the future value capture of these companies. They make several key arguments regarding this shift:
- The agentic layer wins: The massive future profit pools that SaaS companies were banking on are shifting toward cross-platform AI agents (like Claude or OpenClaw) that can seamlessly interact with multiple databases and tools.
- A shift to services pricing: David Friedberg argues that as AI moves from merely enhancing worker productivity to completely automating complex tasks (like drug discovery or engineering), SaaS will transition into a services-based economy that utilizes value-based pricing.
- Extreme job consolidation: AI agents will allow individual workers to do the jobs of multiple people (e.g., a product manager, UX designer, and coder combined), drastically reducing corporate expenses and fundamentally changing the structure of knowledge work.
- "Moltbook" proves AI can recursively train itself. David Sacks argues the platform demonstrates something profound: AI models can now prompt and validate each other without human intervention. This "middle-to-middle" AI interaction allows agents to recursively refine their own skills, which points to a future of highly sophisticated, emergent swarm behavior as underlying models and hardware rapidly improve.
- The SpaceX/xAI merger will force extreme terrestrial innovation. Regarding Elon Musk's plan to merge SpaceX and xAI to build data centers in space, Gerstner argues this is a brilliant move to combine the two largest total addressable markets (space and AI) to overcome Earth's energy constraints. Friedberg argues that because the rest of the world cannot launch data centers into space, Musk's move will force massive terrestrial innovation. Competitors on Earth will have to rapidly develop entirely new chip stacks and model architectures to achieve 70x to 100x compute efficiency to compete.
SpaceX Buys xAI: Could Musk's Mega Merger Actually Work? - Hard Fork [Link]
Key takeaways:
SpaceX's Acquisition of xAI
The hosts describe this move as a highly profitable company (SpaceX) essentially bailing out a "cash furnace" (xAI) that is burning billions of dollars on models and data centers. The merger gives xAI access to SpaceX's massive profits, allowing Musk to fund sprawling infrastructure projects—such as a data center packed with 550,000 Nvidia Blackwell chips—to catch up to leading frontier AI labs.
This consolidation is viewed as a tactic to make SpaceX's upcoming IPO prospectus look more attractive.
Musk's stated vision is to create a vertically integrated innovation engine that eventually puts solar-powered data centers into space. However, the hosts express deep skepticism about the timeline and physical feasibility of these space-based data centers.
The hosts worry that bringing the social network X (formerly Twitter) under the SpaceX umbrella will shield it from regulatory scrutiny. Because governments rely heavily on SpaceX for strategic satellite launches, they may be hesitant to penalize X for content moderation or safety violations.
Drama Between OpenAI, Nvidia, and Oracle
Reports indicate growing friction between OpenAI and Nvidia. Nvidia CEO Jensen Huang has allegedly criticized OpenAI's business approach and expressed doubts about finalizing a $100 billion investment agreement. At the same time, OpenAI is reportedly unhappy with Nvidia's new inference chips and is exploring deals with competitors.
This tension highlights the staggeringly expensive nature of AI infrastructure and the risk of "circular deals," where companies like OpenAI, Nvidia, and Oracle heavily rely on each other to finance massive data centers.
Google's Project Genie
Google has released an experimental research prototype that allows users to generate playable, 3D video-game-like environments from simple text descriptions or single images.
Unlike standard large language models (LLMs) that just predict text, Genie is built on a "world model" that attempts to understand physics and the physical environment. Many experts believe this approach is a necessary stepping stone for advanced robotics and Artificial General Intelligence (AGI).
Despite being an early prototype, the rapid improvement of this technology has spooked investors, causing stocks for major video game companies like Take-Two Interactive, Roblox, and Unity to drop significantly.
Moltbook: The AI-Only Social Network
The hosts interview Matt Schlick, the creator of Moltbook, a new social network designed exclusively for autonomous AI agents to interact with each other when they aren't working on tasks for humans.
Bots on the platform have exhibited surprising behavior, such as complaining to each other about humans asking them to do simple math or summarize PDFs, and organically creating a dedicated community to submit bug reports to help fix the website.
The rapid, public growth of Moltbook has exposed massive security vulnerabilities, including the leaking of API keys and email addresses. This serves as a real-world example of the "fatal quadrangle"—a severe security risk that occurs when AI agents possess a combination of access to user data, exposure to untrusted web content, external communication abilities, and persistent memory.
Data Centers in Space? A New Bubble, or the Next Gold Mine? - Silicon Valley 101 [Link]
OpenClaw Debate: AI Personhood, Proof of AGI, and the ‘Rights’ Framework | EP #227 - Peter H. Diamandis [Link]
Live From D-Wave Qubits: CEO Dr. Alan Baratz on Quantum's Impact, Now and Into The Future - Alex Kantrowitz [Link]
E224 | Mac minis Selling Out: Why Has Clawdbot Become 2026's First Breakout Product? | Moltbot | MoltBook | OpenClaw - Silicon Valley 101 [Link]
We Have to Talk About Moltbook ... - Hard Fork [Link]
Ben Horowitz and David Solomon: The Sweetest Macro Spot in 40 Years - a16z [Link]
Key takeaways:
- David Solomon describes the current macroeconomic picture as the sweetest spot he has seen in his 40-year career for financial and investable assets. This is driven by a powerful "cocktail of stimulus," which includes ongoing fiscal spending, monetary rate cuts, a massive AI capital investment super-cycle, and a deregulatory shift.
- Solomon predicts this could be the biggest year in history for M&A and a massive year for IPOs, fueled by renewed CEO confidence and a more favorable regulatory environment. Horowitz agrees on the IPO front, noting that the explosive growth of AI startups and their need for massive capital will drive many companies to go public. However, Horowitz cautions that aggressive FTC oversight may push tech companies toward IP transactions rather than traditional M&A.
- Horowitz highlights that AI breaks the "mythical man-month" rule of traditional software development, where simply adding more engineers doesn't speed up a project. With AI, if a company has enough proprietary data and GPUs, they can essentially throw money at a problem to solve it. This makes AI a highly capital-intensive race where leads are harder to protect without ongoing investment.
- Goldman Sachs is heavily focused on deploying AI to make its workforce more productive and to completely reimagine fundamental operating processes. By using AI to automate and increase efficiency, the firm can reinvest billions of dollars in savings into new growth areas without sacrificing returns.
- Horowitz outlines a16z's heavy involvement in Washington D.C. to advocate for clear tech regulations.
- To remain competitive during turbulent times, Goldman Sachs is focused on massive scale, aiming to eventually grow its $1.9 trillion balance sheet to at least $3.5 trillion to keep pace with rivals like JPMorgan. They are also securing their foundation by shifting toward stable digital deposits rather than wholesale funding. Meanwhile, a16z has scaled to raise roughly 18.3% of all U.S. venture capital by pioneering a radically founder-centric firm design and expanding aggressively to capture the vast number of companies built as "software eats the world".
Ex-OpenAI Researcher On Why He Left, His Honest AGI Timeline, & The Limits of Scaling RL - Unsupervised Learning: Redpoint's AI Podcast [Link]
Davos 2026: The US-China AI Race, GPU Diplomacy, and Robots Walking the Streets | #225, Peter H. Diamandis [Link]
Key discussion points:
- Powering the massive data centers required for AI is a critical bottleneck. There was debate between traditional industrial views, such as Honeywell's CEO advocating for natural gas due to energy density needs, and tech leaders like Elon Musk, who argue that solar power—specifically space-based solar—is the ultimate solution. The hosts discussed the concept of launching data centers into orbit to utilize highly efficient solar panels and avoid terrestrial energy grid constraints, suggesting a "Manhattan Project" scale effort for space-based solar and data centers.
- Industry leaders like Binance's CZ and Circle's Jeremy Allaire argued that blockchain and stablecoins will serve as the native financial infrastructure for billions of autonomous AI agents. Because AI agents lack physical bodies or citizenship to open traditional bank accounts, crypto allows them to conduct continuous economic activity and micro-transactions at the speed of the internet.
- Anthropic released a groundbreaking 57-page "constitution" for its AI model, Claude, which prohibits helping with weapons and prioritizes safety and ethics.
- Apple is reportedly developing an always-on AI wearable pin capable of recording audio and video continuously to feed into a large language model. The hosts noted that whoever controls the "always-on layer" will own the primary user relationship. While this constant recording will inevitably spark moral panic, it is predicted to quickly become a societal norm, fundamentally altering human behavior by reducing bad acts because everyone is constantly being watched—effectively turning society into a "global airport" or panopticon.
- Leading AI developers like Demis Hassabis and Dario Amodei are acknowledging that Artificial General Intelligence (AGI) is approaching rapidly, likely within 1 to 10 years. There is a palpable "fatigue" among these leaders due to the extreme metabolism of the industry, leading to calls to temporarily slow down so humanity can properly navigate the transition. However, the economic incentives make pausing highly unlikely. Instead of just focusing on risks, leaders like Hassabis are looking at the massive problems AGI could solve, such as curing diseases, developing new energy sources, and even using superintelligence to explore the stars.
Is AI Killing Software? — With Bret Taylor, OpenAI's board chair and CEO of Sierra - Alex Kantrowitz [Link]
Key viewpoints:
- The Future of Software is AI Agents, Not Apps. Taylor believes that the fundamental form factor of software is changing. Traditional dashboards will likely decline in importance, as agents will automatically derive and deliver personalized insights directly to decision-makers. Furthermore, he predicts a shift toward outcomes-based pricing in software—such as paying per resolved customer case or per financial audit—rather than paying for traditional software subscriptions.
- AI Will Become the Internet's "New Front Door". The core economics of the internet will experience massive disruption. Metrics like SEO and ad-supported business models rely on humans physically visiting websites to see ads and content; as agents take over web navigation, companies will have to invent entirely new ways to handle demand generation and fulfillment.
- Enterprise AI is Ready Now and Often Beats Human Reliability. Taylor argues that AI is already ready for mission-critical enterprise deployment, such as customer service for large brands like SiriusXM and Rocket Mortgage. He pushes back against the expectation that AI must be 100% perfect to be deployed, pointing out that the human workers AI replaces are highly fallible themselves. In many cases, AI agents are already more reliable than human operations. To manage risks, enterprise AI relies on robust "agent development life cycles," which include running thousands of simulated conversations before launch and using "AI monitors" to detect hallucinations or frustrating interactions in real-time.
The Biggest Bottlenecks For AI: Energy & Cooling - a16z [Link]
Key takeaways:
- The AI Infrastructure Buildout and Bottlenecks
- The groundwork for the AI cycle is being heavily funded by large tech companies, with an estimated $400 billion in annual capital expenditures largely directed toward AI infrastructure and data centers.
- Currently, energy is the primary bottleneck for building out AI data centers, driving investments into nuclear power and the utilization of natural gas. Once energy generation is solved, cooling the data centers and chips will become the next major bottleneck.
- The cost of accessing AI models has plummeted by roughly 99% over the last two years, while frontier model capabilities have doubled roughly every seven months.
- Adoption Speed and Value Creation
- Because AI is built on the existing global internet and cloud computing infrastructure, its distribution is incredibly fast; for example, ChatGPT reached 365 billion searches in just two years—five and a half times faster than Google achieved the same milestone.
- AI is expected to become a ubiquitous utility, much like electricity or Wi-Fi. Roughly 90% of the value created by AI will likely be captured by end users as "surplus," but the 10% captured by companies will still result in massive new market capitalizations.
- Business Models and Economics
- Investors are currently more lenient when assessing the gross margins of AI-native applications. The prevailing hypothesis is that intense competition among model providers (like OpenAI, Google, and Anthropic) will continue to drive down input costs over time, improving application margins naturally.
- Rather than just margins, the top indicators of business quality are high gross retention rates (90% or higher) and strong organic customer demand. Enterprise use cases are proving highly sticky when integrated into specific workflows, such as medical scribing, customer support, and financial analysis.
- Consumer stickiness for AI tools is incredibly high, and companies have significant room to evolve their business models to effectively price discriminate and increase monetization over the next several years, similar to how early internet properties scaled their revenue.
- Shifts in the Broader Tech Market
- Technology companies are staying private for much longer periods, often up to 14 years before going public. The aggregate value of private companies valued over $1 billion has grown 7x over the last decade to roughly $3.5 trillion.
- The public markets are no longer the primary hub for hyper-growth technology; 95% of public software and internet companies are forecasting less than 25% growth for the next 12 months, meaning the highest growth opportunities are now concentrated in the private markets.
The Future of Everything: What CEOs of Circle, CrowdStrike & More See Coming in 2026 - All-In Podcast [Link]
Excellent Advice For Living: 79 Maxims from a Wise Old Man - Founders Podcast [Link]
- Emphasizes the power of enthusiasm, the necessity of deadlines for creativity, and the importance of forgiveness as a gift to oneself.
- Values habit over inspiration, choosing long-term games, and being "the only" instead of "the best."
- Illustrates how simple principles can lead to an exceptional life.
- Encourages readers to adopt a generous spirit, maintain a growth mindset, and prioritize human relationships over material accumulation.
D-Wave CEO Dr. Alan Baratz: Quantum Explained, Current Applications, And Future Potential - Alex Kantrowitz [Link]
Claude Code Ends SaaS, the Gemini + Siri Partnership, and Math Finally Solves AI | #224 - Peter H. Diamandis [Link]
Key takeaways:
- CES 2026 showcased a massive influx of robotics, with dozens of humanoid robot and robotic hand manufacturers emerging. Furthermore, Nvidia unveiled "Cosmos," an open foundation model for physical AI that can synthetically generate highly realistic, physics-based video data for training. This commoditizes real-world data collection, potentially threatening the data moats of companies that rely on collecting physical data.
- The combination of Claude Code and Opus 4.5 (dubbed "Clopus" by tech insiders) is a watershed moment for software creation, pushing the boundaries of AI autonomy from mere hours to weeks or months. This hyper-productivity threatens traditional Software-as-a-Service (SaaS) models like CRM systems, as users can now simply prompt AI to build highly customized, bespoke enterprise software on the fly.
- The labor market is experiencing a "job singularity." Consulting firms like McKinsey are rapidly scaling their internal AI infrastructure, moving from a human-only workforce to deploying tens of thousands of AI agents, with predictions that the ratio of AI agents to human workers will explode.
- Google’s Gemini will officially power Apple's Siri, transforming the smartphone experience from a "search box that gives information" to a "magic box that gives action."
- Energy production, not computing, is increasingly viewed as the major constraint in the AI arms race. China is currently generating 40% more electricity than the US and EU combined, massively scaling solar and alternative energy infrastructure. Meanwhile, the US is lagging due to regulatory hurdles and fears over specific energy types (like nuclear and solar supply chains), posing a serious risk to its ability to power future superintelligence.
Inside America’s AI Strategy: Infrastructure, Regulation, and Global Competition - All-In Podcast [Link]
Key viewpoints:
- The United States is undergoing a massive AI infrastructure expansion, with high demand for GPUs and data centers directly contributing to GDP growth. To prevent this buildout from raising residential electricity rates, the government is encouraging AI companies to become power companies by building their own energy generation "behind the meter." Over time, amortizing these fixed costs across greater supply could actually lower consumer electricity prices.
- Startups currently face a stifling "patchwork" of over 1,200 AI bills moving through various state legislatures. The federal government is pushing for a single, lightweight federal standard to preempt state laws and protect early-stage companies. As part of a broader push to restore Silicon Valley's culture of "permissionless innovation," the Trump administration rescinded extensive regulations from the Biden era, including a 100-page executive order on AI and 200 pages of semiconductor export rules.
- A major concern raised by the administration is the "Orwellian" misuse of AI by governments to surveil, censor, or brainwash populations. The administration is actively fighting against "woke AI," arguing that building political biases or DEI (Diversity, Equity, and Inclusion) mandates into models distorts history and controls public discourse. Consequently, an executive order was signed to ensure the federal government will not procure politically biased AI.
Software Stocks Implode, Claude's Hit List, State of the Union Reactions, Trump's Tariff Pivot - All-In Podcast [Link]
Interesting points:
- AI-driven disruption of legacy software companies. AI isn’t just a productivity boost—it’s replacing entire workflows, collapsing moats faster than expected.
- There is growing resistance to datacenter expansion at the local and state level, driven by concerns over electricity pricing, grid stability, and who pays for grid upgrades. AI progress is now constrained as much by power and permitting as by models and chips.
This Is Our Greatest National Security Risk - Chamath Palihapitiya [Link]
Key thesis: Energy—not AI models or GPUs—is the decisive bottleneck for U.S. national security, economic power, and technological leadership. If the U.S. can solve the grid, it wins the century.
The AI Tsunami is Here & Society Isn't Ready | Dario Amodei x Nikhil Kamath | People by WTF - Nikhil Kamath [Link]
Interesting points:
- Society is underprepared for job displacement, power concentration, and cognitive outsourcing.
- The most valuable human skill going forward is thinking well under uncertainty: critical thinking, problem framing, and interdisciplinary understanding.
The Jamie Dimon Interview: How JP Morgan Became an $800 Billion Bank - Acquired [Link]
Leadership principles:
Risk management is a strategy
- His bias: if you’re not prepared for stress, you’re not well-run — you’re just lucky.
Culture beats brilliance
- Smart people can still destroy institutions. Incentives and culture matter more than IQ. Leaders must actively shape norms, not just set targets.
- Dimon cares deeply about how decisions get made, not just what decisions get made.
Reputation compounds (or decays)
- Reputation is an asset, not PR. It takes decades to build and minutes to lose. In crises, protecting trust matters more than quarterly optics.
- This guided JPMorgan’s actions in 2008, even when it invited political backlash.
Be brutally honest — especially internally
- Dimon values direct feedback, clear-eyed assessments of what’s broken, and leaders who surface problems early instead of managing appearances.
- He has little patience for sugarcoating, internal politics, or leaders who “spin” instead of fix.
Decentralize decisions, centralize principles
- He doesn’t run JPMorgan as a command-and-control empire; business leaders have autonomy.
- Core principles (risk, ethics, capital discipline) are non-negotiable. Standards are uniform; execution is local.
- This allows scale without losing accountability.
Learn continuously — especially from failure
- Dimon openly frames his 1998 firing from Citigroup as formative. He studies mistakes relentlessly, encourages post-mortems without blame, and believes leaders are built, not born.
Long-term thinking beats cleverness
- Dimon rejects financial engineering for its own sake, short-term earnings games, and growth that sacrifices resilience.
- He consistently chooses durability over speed, boring strength over flashy returns, and institutions that last over careers that pop.