2026 February - What I Have Read
Substack
Make the Most of Claude Code: 12 Projects From Your First Prompt to a System That Runs Itself - Jenny Ouyang [Link]
3 Ways to Stop Wasting Your 1:1s With Your Manager - Steve Huynh [Link]
Status
- What you’re working on, blockers, priorities
- Necessary—but dangerous if it consumes the entire meeting
- Fix: Send status updates before the meeting so 1:1 time is freed for deeper topics
Career Growth
- Development goals, expectations, promotion readiness
- Managers can’t help you grow if you never say what you want
- Keep this conversation “warm” by revisiting it every few weeks, even with small questions
The Future
- Team direction, strategy, upcoming changes, and decisions
- Your manager has context you don’t—and you have ground-level insight they lack
- Asking about the “why” behind decisions builds trust and influence
How to Progress Faster Than Anyone Else In Your Career - John Kim [Link]
Takeaways:
- Audit your setup before you work harder. If you’re stalled, it’s likely structural, not personal. Ask:
- Is there room for promotion on this team?
- Is the tech lead role open or already filled?
- Does your manager consistently get people promoted?
- Who advocates for you when you’re not in the room?
- Team composition matters more than you think. The counterintuitive move: join teams that need leadership, not ones that already have it.
- Your manager is the biggest lever. Managers aren’t equal. Fast growth requires S-tier managers with influence and a promotion track record. Before joining a team, ask how the manager helped others get promoted.
- Your career is decided in rooms you’re not in. Promotions are calibrated socially. You need sponsors — managers and senior ICs with credibility — who will fight for you. Joining orgs where leadership lacks relationships is risky.
- Performance reviews are a game (learn the rules).
- Project choice is strategic — some projects are promotion vehicles, others are traps.
- Feedback must be frequent and intentional.
- Know the evaluation rubric and map your work to it explicitly.
- Maintain a brag doc so impact doesn’t get lost.
- The game changes at higher levels. What got you to mid-level won’t get you to Staff. Scope, influence, and sponsorship matter more than raw execution. Mentors with social capital accelerate you faster than generic advice.
The 80/20 rule for your whole life - Yew Jin Lim [Link]
Life works best when you run it like a portfolio:
- 80% = a stable, boring foundation that compounds
- 20% = a laboratory for curiosity, experiments, and asymmetric upside
The trick is not avoiding failure—it’s containing risk so failure becomes tuition, not trauma.
Takeaways:
A strong base makes experimentation safe. Without the base, experiments become reckless.
Judge the portfolio, not the position. What matters is whether the aggregate trajectory is improving. Losses are expected, necessary, and informative—especially in exploration-heavy phases. Zoom out or you’ll misinterpret perfectly healthy experimentation as failure.
Curiosity compounds when it’s diversified. The most valuable asset isn’t money—it’s your mental model. Learn across domains, not just within your lane. Follow curiosity without demanding immediate payoff. Use small, low-risk experiments to learn deeply.
Decision-making > raw intelligence. Knowledge alone isn’t enough. The harder skill is calibration. Match strategies to your life constraints (time, attention, energy). Separate ego from outcomes. Sometimes the smartest move is inaction, not optimization.
Compounding only works if you avoid catastrophic loss. You can’t compound what you don’t protect.
Protection principles:
- Live below your means
- Don’t over-leverage
- Secure your core job before side projects
- Make foundational habits non-negotiable
Build the base first, then experiment small. Small bets + long time horizons beat bold bets + fragility.
Practical advice for starting out:
- Lock in the boring fundamentals (career competence, savings, daily practice)
- Size experiments to learn, not to win big
- Diversify experiments, not just bets
- Keep records—treat your life like a lab notebook
Claude Cowork: 10 Use Cases I Tested + 67 More by Profession - Daria Cupareanu [Link]
Claude Cowork Plugins: What They Are, How to Build One (+ My Writing Plugin, Fully Broken Down) - Daria Cupareanu [Link]
11 Public Speaking Techniques from the World’s Greatest Speakers - Polina Pompliano [Link]
Takeaways:
- Confidence is often performed before it’s felt. Many elite speakers create an alter ego to step into confidence they don’t yet fully have. Act like the confident version of yourself long enough, and it becomes real.
- You can’t think your way out of nerves—you move your way out. The body leads the mind. Speakers intentionally raise their heart rate before speaking to simulate on-stage stress (running, jumping, cold exposure, etc). Control physiology first; calm thinking follows.
- Never start “cold”. Audiences need to be warmed up emotionally before content lands. Questions, jokes, clapping, or interaction create psychological safety and rapport. Engagement early prevents self-consciousness and stiff delivery.
- Speak to individuals, not a crowd. Great speakers address audiences as if speaking one-on-one. Personal language creates intimacy—even at scale.
- Simple, rhythmic language beats sophistication. Using devices like polysyndeton (“and… and… and…”) makes speech more emotional and memorable. Rhythm > vocabulary complexity.
- Body language is part of the message. Hand gestures, posture, and movement amplify credibility and warmth. The body can either invite connection or repel it.
- Slowing down makes you sound more powerful. Instead of just pausing, elite speakers elongate vowels, which slows speech and adds emotion. Control of pace signals control of the room.
- Vulnerability is contextual—not universal. Vulnerability works when it aligns with the audience and goal. Know when to open up—and when not to.
- “Bad” speaking habits can humanize you. Strategic filler words can make speakers feel authentic. Perfection isn’t trust-building—humanity is.
- A great speech moves people to act. The most powerful speeches end with a clear call-to-action. Emotion without direction fades. Direction turns emotion into momentum.
- Confidence comes from reps, not theory. The best speakers build a public speaking portfolio—saying yes to small opportunities repeatedly. You learn to speak by speaking, not preparing forever.
How to Get Clawdbot Set Up in an Afternoon - Aman Khan [Link]
Full Tutorial: Set Up Your 24/7 AI Employee in 20 Minutes - Peter Yang [Link]
Import AI 441: My agents are working. Are yours? - Jack Clark, Import AI [Link]
Jack Clark argues that we are moving from a world of AI as a tool to an ecology of autonomous agents. The core arguments are:
The Shift from "Model" to "Agent"
The author shows that AI is no longer just something you "chat" with; it is becoming a "fleet of minds" that works independently. Using the example of a Gemini-assisted math proof, he demonstrates that AI "agents" are now capable of complex, iterative reasoning that mimics and augments human professional labor.
The Internet as a "Predator-Prey" Ecology
By including the Poison Fountain story, Clark highlights that as agents become the primary "users" of the internet, the digital environment will change. He views this as an evolutionary struggle. The agents are scraping and learning from everything. Humans are using tools like Poison Fountain to "pollute" the food supply (data) to protect human agency or slow down AI.
The point is that the internet is no longer just for humans to read; it’s a battlefield for automated intelligence.
The Need for New "Institutions"
The author uses Eric Drexler’s framework to provide a solution to the chaos of this new ecology. His point is that we cannot control a "superintelligence" if we think of it as a single, scary monster. Instead, we must build human-led institutions—structured processes of transparency and competition—that treat AI as a "pool of resources" or a set of services that can be managed, much like we manage a large corporation or a space program.
The "Data Efficiency" Warning (The Fictional Story)
The concluding "Tech Tale" serves as a warning: as models become more intelligent, they become "data efficient," meaning they can learn secrets (like the identities of their creators) from very tiny leaks. This ties back to his opening: if these agents are "multiplying" us, they are also becoming harder to "hide" from or contain.
My time at Amazon, Part I - Becca Selah [Link]
Blogs and Articles
The Product Model at Google - Marty Cagan and Elias Lieberich, Silicon Valley Product Group [Link]
The article explains how Google has successfully scaled the product operating model—focusing on solving meaningful problems, empowering teams, and driving outcomes rather than shipping features. This model has been central to Google’s ability to innovate for over 25 years, including through major shifts like mobile and AI.
- Strategy = choosing the right problems, not prescribing features.
- Decisions are made by learning fast, not by seniority.
- Teams that build it also own it.
- OKRs only work in a true product model.
- Expertise-based leadership beats coordination layers.
- The product model enables adaptation through major technological shifts.
Unlocking the Codex harness: how we built the App Server - Celia Chen, OpenAI [Link]
Introducing GPT‑5.3‑Codex - OpenAI [Link]
The Waymo World Model: A New Frontier For Autonomous Driving Simulation - Chiyu Max Jiang, Xander Masotto, Bo Sun, Waymo [Link]
The Waymo World Model is a high-fidelity, generative AI system designed to create hyper-realistic autonomous driving simulations. Developed in collaboration with Google DeepMind, it allows Waymo to test its "Driver" in complex, rare, and "long-tail" scenarios that are difficult to encounter in the real world.
AI Doesn’t Reduce Work—It Intensifies It - Aruna Ranganathan and Xingqi Maggie Ye, Harvard Business Review [Link]
The researchers identified three primary ways AI increases the burden on employees:
- Because AI makes complex tasks feel more accessible, employees often take on responsibilities outside their original job scope (e.g., designers writing code). This "vibe-coding" often creates more work for experts who must then review and fix AI-generated errors.
- AI reduces the "friction" of starting a task, leading workers to fill natural breaks—like lunch or the commute—with "quick prompts," resulting in a workday with no downtime.
- Users often run multiple AI agents or threads simultaneously, creating a high cognitive load and a constant sense of "juggling" tasks.
The study found that while employees felt more productive, they did not feel less busy. The speed of AI sets a new, faster "normal," raising expectations and creating a self-reinforcing cycle where higher speed leads to more work, which leads to a greater reliance on AI.
To prevent burnout and "workload creep," the authors suggest organizations implement an AI Practice consisting of:
- Structured moments to step back and assess assumptions before finalizing AI-assisted decisions.
- Batching notifications and protecting "focus windows" to prevent the fragmentation of attention.
- Prioritizing human connection and dialogue to counter the isolating effects of solo AI work and to foster genuine creativity.
OpenClaw, OpenAI and the future - Peter Steinberger [Link]
Designing for Transparent Screens - Google Design [Link]
Introducing Experiments in ElevenAgents - Kacper Walentynowicz, Lauren Rothwell [Link]
Key Capabilities
- A/B Testing: Create variants of agents with different prompts, voices, workflows, or knowledge bases to see what performs best.
- Controlled Routing: Define a specific percentage of live traffic to route to a new variant to ensure safe testing.
- Measurable Metrics: Track impact on business outcomes such as CSAT, Containment Rate, Conversion, and Latency.
- Easy Deployment: Once a winner is identified, users can "promote" the variant to full production with a clear audit trail and rollback options.
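The controlled-routing capability above (send a fixed percentage of live traffic to a new variant) is commonly implemented with deterministic hashing, so the same conversation always lands on the same variant across turns. A minimal sketch of that idea, with hypothetical variant names; this is not ElevenAgents' actual implementation:

```python
import hashlib

def pick_variant(conversation_id: str, variants: dict[str, float]) -> str:
    """Deterministically route a conversation to a variant.

    `variants` maps variant name -> traffic share (shares sum to 1.0).
    Hashing the conversation ID keeps routing stable across turns.
    """
    # Map the ID to a uniform point in [0, 1).
    digest = hashlib.sha256(conversation_id.encode()).hexdigest()
    point = int(digest[:8], 16) / 0x100000000
    cumulative = 0.0
    for name, share in variants.items():
        cumulative += share
        if point < cumulative:
            return name
    return name  # fallback for floating-point rounding at the boundary

# Route roughly 10% of live traffic to a new agent variant.
print(pick_variant("conv-42", {"agent-v2": 0.10, "agent-v1": 0.90}))
```

Because routing is a pure function of the conversation ID, "promoting" a winner is just changing the share table, and a rollback restores the old table without re-randomizing existing users.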
Statement from Dario Amodei on our discussions with the Department of War - Anthropic [Link]
Anthropic refuses to support two specific use cases, arguing that they are either incompatible with democratic values or beyond what current technology can do reliably:
- Mass Domestic Surveillance: Anthropic opposes using AI to automate the comprehensive tracking of U.S. citizens, arguing it poses a novel risk to fundamental liberties.
- Fully Autonomous Weapons: The company states that current frontier AI is not reliable enough to select and engage targets without human intervention ("out of the loop") and refuses to put warfighters or civilians at risk.
How Bots, Banking and Stablecoins Will Dominate Fintech in 2026 - Emily Mason, Paige Smith, Bloomberg [Link]
The financial landscape of 2026 is expected to be defined by a significant merger of traditional banking, cryptocurrency, and artificial intelligence. Numerous fintech firms are currently seeking national bank charters to gain direct access to federal payment systems and eliminate third-party intermediaries. Simultaneously, stablecoins are projected to become a primary medium for global commerce, with major credit card networks and neobanks adopting them for faster settlements. Technological advancements will likely introduce autonomous AI agents capable of negotiating and executing financial transactions independently for consumers. While regulatory hurdles and market bubbles remain potential risks, industry leaders anticipate a shift toward a more integrated, digital-first economic infrastructure.
6 Fintech Startup Predictions For 2026 - Alex Lazarow, Forbes [Link]
Interesting predictions:
- Instead of "launching everywhere," startups will focus on replicating proven business models in home markets. Local depth and regulatory navigation will become more valuable than broad geographic reach.
- The "software-only" model is fading. Successful AI companies will likely embed operations or use services as a "wedge" to deliver actual outcomes rather than just tools.
- The "Midas List" of top investors will continue to shift away from Silicon Valley as talent and category-defining companies emerge globally.
- Mergers and acquisitions will become a primary exit strategy. Incumbents and scaled tech firms will look to buy distribution, talent, and vertical capabilities rather than building them from scratch.
- AI allows small teams to reach significant milestones (revenue, customers) with very little capital, potentially skipping traditional Seed or Series A funding rounds.
- As the "AI honeymoon" ends, customers will demand real ROI. Startups that over-leveraged their valuations during the hype may face difficult "down rounds" if they haven't reached profitability.
Open Standards Will Unlock Agentic AI's Next Breakthrough in Fintech - Manik Surtani, Fintech Weekly [Link]
Main points:
- The Problem of Silos: Current fintech ecosystems are fragmented, with isolated data formats for payments, banking, and lending. This "silo" effect weakens an AI agent’s ability to observe, decide, and act confidently across different systems.
- The Solution: Open standards like the Model Context Protocol (MCP) allow AI systems to interact with real-world tools and data seamlessly.
- The Agentic AI Foundation (AAIF): Recently formed by Block, Anthropic, and OpenAI in partnership with the Linux Foundation, this body aims to establish the open standards necessary for AI agents to speak a "shared language."
- Industry Adoption: Major players are already adopting these protocols:
- Stripe: Built MCP support for payment and subscription management.
- Square: Released MCP servers for its APIs.
- Shopify: Launched commerce integrations using the protocol.
- Future Vision: The next generation of fintech will feature specialized agents that collaborate—for example, a fraud detection agent working with a cash flow forecasting agent—to automate complex tasks like real-time budget reconciliation and vendor payment optimization.
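MCP, mentioned above, is built on JSON-RPC 2.0: a client invokes a server's tool through a `tools/call` request. A sketch of what such a message might look like for a payments server; the tool name and arguments here are illustrative, not a real Stripe or Square API:

```python
import json

# Hypothetical MCP-style tool call: a client asks a payments server
# to run its "create_payment_link" tool. The envelope follows the
# JSON-RPC 2.0 shape MCP uses; the tool itself is made up.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_payment_link",
        "arguments": {"amount": 2500, "currency": "usd"},
    },
}
print(json.dumps(request, indent=2))
```

The "shared language" point is exactly this envelope: a fraud agent and a forecasting agent can call each other's tools without bespoke integrations, because every capability is exposed through the same request shape.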
When Payments Became Infrastructure: The Irreversible Shift of 2025 - Finextra [Link]
Main points:
- Payments are no longer just about faster checkout or convenience; they have become a prerequisite for participation in modern commerce. Institutions that failed to adopt real-time, automated, and AI-driven systems found themselves structurally and existentially constrained.
- Real-time payment volumes are exploding globally (projected to exceed 500 billion transactions by 2028). Systems like India’s UPI and Brazil’s Pix succeeded because they became dependable, interoperable, and "ambient"—working quietly in the background of daily life.
- In 2025, the focus shifted from how fast money moves to how confidently it arrives. Instant payouts became mechanisms of trust in sectors like the gig economy and logistics, improving worker retention and merchant cash flow. Leading institutions adopted "multi-rail" strategies to ensure resilience and redundancy rather than relying on a single payment path.
- Payments moved beyond consumer convenience into deep business infrastructure (e.g., healthcare and manufacturing workflows); the most successful versions are "invisible" and highly specialized. AI transitioned from advisory to operational, managing tasks like invoice validation and cash flow forecasting. The focus is now on "agentic AI" that is bounded, auditable, and transparent.
- The industry shifted from simple fraud blocking to fraud orchestration. This balances the need to stop cybercrime (estimated at $10 trillion annually) with the need to reduce "false declines," which can cost merchants 3–5% of their revenue.
The biggest fintech trends of 2025 - Finextra [Link]
AI transitioned from an abstract concept to a core component of financial institutions.
2025 is the "start of the stablecoin revolution" due to increased regulatory clarity and institutional interest.
Governments and regulators shifted toward policies designed to "unleash economic growth."
OpenAI bets big on audio as Silicon Valley declares war on screens - Connie Loizos, TechCrunch [Link]
Trends:
- OpenAI is reportedly overhauling its audio models to prepare for the launch of an audio-first personal device, expected in early 2026. This move involves unifying several engineering and research teams to create a more natural conversational experience.
- Meta recently updated Ray-Ban smart glasses with advanced directional listening features.
- Google is experimenting with "Audio Overviews" to turn search results into conversations.
- Tesla is integrating xAI’s Grok chatbot for hands-free vehicle control.
- Startups: New hardware like the Humane AI Pin, the Friend AI pendant, and upcoming AI rings (from companies like Sandbar) are all betting on audio as the primary interface of the future.
Plaud launches a new AI pin and a desktop meeting notetaker - Ivan Mehta, TechCrunch [Link]
21 Lessons From 14 Years at Google - Addy Osmani [Link]
Focus on people and problems; prioritize simplicity and clarity; maintain execution and momentum; invest in personal and professional growth.
Observations of Leadership (Part Two) - Hazel Weakly [Link]
Key observations:
- In large organizations, "operating at cross-odds" is inevitable and even desirable. Friction reveals complexities in the solution space that immediate alignment might miss.
- "Non-action" is an intervention of equal weight to action.
- "Technical debt" is a proxy for communication breakdowns. In one instance, she fixed tech debt not by writing code, but by improving visibility between Product and Engineering teams.
- She highlights the importance of looking for inflection points where incremental thinking fails. She notes the frustration of predicting major industry shifts (like supply chain risks) only to have them ignored until they become crises. However, being "early" allowed for a faster pivot once the organization's mental model finally caught up to reality.
Lord of War, meet Lord of Tokens: Torture-testing image models on design-agency grade work - Key Singh [Link]
Author Key Singh uses a complex design task to investigate whether modern AI models can replicate high-level professional work that previously required weeks of manual labor by a design agency.
Meta Unveils Sweeping Nuclear-Power Plan to Fuel Its AI Ambitions - Jennifer Hiller, The Wall Street Journal [Link]
Prioritize Relatively - Andrew Bosworth [Link]
The author suggests that prioritization must always be relative.
- The Wrong Question: People often ask, "Is this a good/exciting idea?" Since most professional ideas are "good," this leads to endless debate without resolution.
- The Right Question: "Is this more valuable than the work we are doing right now?" This shift forces explicit trade-offs and clarifies why certain tasks are chosen over others.
- Saying "No" to Good Ideas: The hardest part of leadership isn't rejecting bad ideas; it's rejecting great ideas that simply aren't the best use of time at that moment.
- Compounding Value: He uses the example of infrastructure vs. new features. While a new feature is exciting, backend work that doubles team velocity "compounds" and may be the higher relative priority.
- Dynamic Lists: Priorities shouldn't be static. Sometimes, after finishing the top item, the best move is to go back and improve it further rather than moving to item #2.
Apple’s new Google Gemini deal sounds bigger, better than expected - Ryan Christoffel [Link]
The agreement confirms that the next generation of Apple Foundation Models will be based on Gemini and Google’s cloud technology. This partnership will power a more personalized and capable version of Siri, expected to roll out later this year.
Apple emphasizes that user data will remain protected. The AI will continue to run on-device and through Apple’s Private Cloud Compute, using Google's tech as a foundation without giving Google access to private user data.
The author notes that this is a significant "win-win." Apple gains world-class AI infrastructure to fix long-standing Siri issues, while Google secures a massive distribution platform for its Gemini technology.
Banks are not disrupted. - Simon Taylor [Link]
The universal bank model is being pulled apart by:
- Shadow Banks (Private Credit & Money Market Funds): Firms like Apollo and Blackstone take on the lending risks banks can't afford, while Money Market Funds and stablecoins like USDC are absorbing the "savings" and "payments" functions.
- The UX Layer: Companies like Mercury, Brex, and Revolut own the customer interface and workflow. They are evolving from "renting" bank charters to either getting their own or using stablecoin rails to bypass traditional systems.
- Universal Banks: The giants (e.g., JPMorgan, Citi) are becoming the "utility layer"—the safe, regulated pavement of the economy—but are no longer the sole drivers of innovation.
Key Industry News Noted:
- Trump's Credit Card Cap: A proposal to cap credit card interest at 10%, which Taylor suggests could push risky borrowers toward "loan sharks" as banks stop lending to them.
- Apple Card Transition: JPMorgan is taking over the Apple Card from Goldman Sachs, which exited after massive losses ($7 billion) due to aggressive underwriting and a lack of consumer credit expertise.
- Stablecoin Volatility: The hack of the Kontingo wallet serves as a reminder of the "Fintech winter" risks still present in the crypto-neobank space.
FIS Launches Industry-First Offering Enabling Banks to Lead and Scale in Agentic Commerce - businesswire [Link]
Understanding Manus sandbox - your cloud computer - manus [Link]
Building the Universal Commerce Protocol - Shopify [Link]
The Universal Commerce Protocol (UCP) is designed to allow AI agents to discover merchant capabilities, negotiate transactions, and handle complex commerce logic (like stacking discounts or regional shipping rules) through a programmable interface.
The protocol aims to move away from rigid, committee-led standards toward an "open bazaar" where anyone can define and publish new capabilities using reverse-domain naming. It is already supported by major retailers including Target, Walmart, Etsy, and Wayfair, alongside millions of Shopify merchants.
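The reverse-domain naming described above is what keeps an "open bazaar" of capabilities collision-free: identifiers are namespaced by their publisher. A minimal sketch of publishing and discovering capabilities this way; all names and payloads are hypothetical, not actual UCP definitions:

```python
# Capability identifiers are namespaced by publisher, so
# "com.example.shipping.regional_rules" can't collide with anyone
# else's capability. Everything below is illustrative.
registry: dict[str, dict] = {}

def publish(capability_id: str, spec: dict) -> None:
    """Register a capability spec under its reverse-domain ID."""
    registry[capability_id] = spec

def discover(prefix: str) -> list[str]:
    """List published capabilities under a reverse-domain prefix."""
    return sorted(k for k in registry if k.startswith(prefix + "."))

publish("com.example.discounts.stacking", {"max_stack": 3})
publish("com.example.shipping.regional_rules", {"regions": ["EU", "US"]})
print(discover("com.example"))
```

The design choice mirrors Java package names: no central committee has to approve a new capability, because ownership of the domain implies ownership of the namespace.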
Speaking Up Without Freaking Out: How to Tackle Communication Anxiety - Stanford Business [Link]
Cognitive Reframing
- Stress as an Asset: Instead of seeing stress as debilitating, view it as the body’s way of energizing you to meet a challenge.
- The Three-Step Mindset Shift:
- Acknowledge: Notice your physical symptoms (sweaty palms, blushing) without judgment.
- Welcome: Realize you only stress about things you care about. Identify the "Why" (e.g., "I'm stressed because I want to help this audience").
- Utilize: Channel that energy into your goal rather than trying to suppress it.
Physical & Behavioral Techniques
- Move Forward: Physically stepping toward your audience (or leaning into a camera) triggers a brain circuit that releases dopamine, turning fear into a sense of reward and motivation.
- De-stressing Rituals: Use deep breathing, visualization, or even "fake" laughter to lower cortisol levels and make your brain more resilient.
- The Power of Comfort: Prioritize sleep before a big talk. If traveling, eat "comfort food" and exercise to boost serotonin and stay out of a risk-averse mindset.
Personal Taste Is the Moat - Cong Wang [Link]
The author redefines taste not as a subjective preference, but as judgment compressed by time. It is developed through:
- Studying great systems and watching bad ideas fail.
- Understanding where long-term complexity accumulates.
- Internalizing the actual experience of the end user.
AI can tell you if a solution works, but only a human with taste can decide if that solution should exist.
In the AI era, the "moat" (competitive advantage) shifts up the stack. Value is no longer found in execution speed, but in high-level decision-making:
- Recognizing a bad architectural direction before it's too late.
- Deciding which abstractions are worth the long-term complexity.
- Ensuring a system ages gracefully.
The author concludes that while AI should be used to reduce toil and catch errors, it should never be the final acceptance bar. Human judgment, informed by exposure to "the best things humans have done," must remain the final filter for anything intended to endure.
How scientists are using Claude to accelerate research and discovery - Anthropic [Link]
AI may be everywhere, but it's nowhere in recent productivity statistics - The Register [Link]
Forrester analyst J.P. Gownder argues that despite the massive hype, AI has yet to show a measurable impact on global productivity.
Forrester predicts AI could replace 6% of US jobs (10.4 million) by 2030. Unlike temporary layoffs, these roles are expected to be "lost structurally," meaning they won't return even when the economy rebounds.
Gownder cites studies suggesting the vast majority (up to 95%) of enterprise generative AI projects are failing to deliver a tangible Return on Investment (ROI).
He suggests many recent job cuts attributed to AI are actually financial "belt-tightening" decisions where companies simply hope AI might fill the gaps later.
Many firms are currently in a "frozen" state—refraining from hiring for open roles to see if AI can eventually handle the workload, though they may be forced to hire if the tech fails to deliver.
Polymarket faces scrutiny for hosting prediction markets on war and conflict - Benitsa Tsekova, Bloomberg [Link]
The article explores the legal and ethical controversies surrounding Polymarket, a prediction platform that allows users to bet on military conflicts. Opponents fear these contracts create dangerous financial incentives for violence and could be exploited by foreign adversaries to manipulate national security outcomes. Polymarket utilizes a cryptocurrency-based model that has historically operated outside of strict American oversight. As the platform seeks to expand its regulated U.S. presence, it faces intense scrutiny regarding whether these markets are contrary to the public interest. Ultimately, it highlights a growing tension between financial innovation and the moral boundaries of speculating on human conflict.
The A in AGI stands for Ads - Ossama Chaib [Link]
Ossama Chaib argues that OpenAI is transitioning from a research-focused entity into a massive advertising powerhouse to sustain its high valuation and infrastructure costs.
- OpenAI hit $10B ARR in June 2025 and is projected to reach $20B ARR by the end of 2025.
- The platform reached 800M Weekly Active Users (WAU) and approximately 190M Daily Active Users (DAU) as of early 2026.
- On January 16, 2026, OpenAI announced the rollout of ads for free-tier users to offset the $8–12B annual burn rate on compute.
The author projects that by 2029, OpenAI’s total revenue could reach $140–150B, with nearly half coming from advertising. He concludes with the cynical take that "AGI" might just be a vehicle for a more sophisticated ad engine, where ads are "baked into the streamed probabilistic word selector."
Your problem framing is sabotaging your strategy - Pavel Samsonov [Link]
Samsonov posits that skipping straight to designing solutions before adequately defining the problem actually slows down progress. In an era where LLMs can commoditize "outputs," the true differentiator for professionals is the ability to frame the right problems.
- Many companies build products first and measure usage later. This creates a feedback loop where success is defined by how many buttons a user clicks (usage) rather than the value they receive.
- When products are designed to extract optimal engagement or pain, users become exhausted by experiences that feel mercenary or intentionally poorly designed.
- Executives often focus on "product problems" (e.g., "how do we add AI?") rather than "customer problems" (e.g., "how do I buy juice?").
- Problem framing cannot be done in isolation or handed off via Jira tickets. It requires a shared mental model between engineers, designers, and stakeholders.
25 Things I Believe In to Build Great Products - Peter Yang [Link]
The enshittification of enshittification - Lee Briggs [Link]
Lee Briggs explores the growing cynical assumption that every successful tech service is destined to eventually exploit its users.
"Enshittification" has become a lazy shorthand. Labeling every change or new feature as a betrayal can become a self-fulfilling prophecy—if users are scared away, it erodes the trust and adoption that allow the current user-friendly business model to function.
Users often fear that helpful services will eventually mutate to extract maximum value at their expense. This is frequently driven by the pressure of VC-backed growth, where companies must prioritize investor returns over user experience.
For security and networking products, trust isn't just a marketing buzzword; it is a business requirement. If that trust breaks, the business model collapses because users will simply move to competitors or open-source alternatives.
Lee advocates for transparency and maintaining long-term trust over short-term value extraction.
The Problem with Prediction Markets - Spencer Farrar [Link]
Spencer explores why prediction markets currently face a "structural ceiling" despite their potential to revolutionize how the world prices risk.
He identifies several critical issues preventing these markets from scaling:
- Most markets are currently too thin for institutional players. Without deep liquidity, high-conviction trades break the order book rather than providing useful price discovery.
- Unlike traditional markets where insider trading is a crime, prediction markets thrive on it for price discovery. However, this creates a hostile environment for Market Makers (MMs) who fear being "run over" by insiders, leading them to widen spreads or exit.
- Current platforms largely require 1:1 collateral (no leverage). This limits the Return on Equity (ROE) for professional traders and makes unwinding positions difficult.
To reach a multi-trillion dollar scale, Spencer argues that prediction markets must evolve into a marketplace for underwriting unique, high-stakes risks.
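The collateral point lends itself to quick arithmetic. A minimal sketch (illustrative numbers of my own, not Spencer's) of how a 1:1 collateral requirement caps return on equity compared to a levered position:

```python
# Back-of-envelope ROE comparison: the same trading edge yields far less
# return on equity when the full notional must be posted as collateral.
def roe(edge, notional, collateral):
    """Profit from the trade's expected edge divided by the capital tied up."""
    return edge * notional / collateral

notional = 100_000
edge = 0.02  # assumed 2% expected edge on the trade

full_collateral = roe(edge, notional, collateral=notional)   # 1:1, no leverage
levered = roe(edge, notional, collateral=notional / 10)      # hypothetical 10x leverage

# With identical edge, the levered trader earns 10x the ROE,
# which is why professionals avoid fully collateralized venues.
print(full_collateral, levered)
```

The numbers are arbitrary; the point is structural — without leverage, capital efficiency is too low to attract professional market makers.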
UX Strategist: The Only Job Where Saying ‘It Depends’ Is Considered Expertise - DNSK WORK [Link]
The author contends that many UX strategists excel at deflecting questions (e.g., "Should we redesign?") by calling for more research, stakeholder alignment, or technical analysis, leading to zero actionable outcomes.
Go Where The Action Is - Tim Ferriss [Link]
The article argues that geographical proximity to your industry’s epicenter is a critical "force multiplier" for career success.
- Specific industries have "epicenters" (e.g., Silicon Valley for Tech, Nashville for Music, NYC for Finance). Being there provides "osmosis"—you learn faster, meet mentors, and encounter serendipitous opportunities.
- While physical moves are best, you can mimic this by joining "virtual epicenters" (Twitter/X, Reddit, specialized digital communities) and creating consistent online content.
- Moving is hard and competitive; Gurley notes that many successful people worked "stepping-stone" or support jobs while grinding toward their breakthrough.
The Adolescence of Technology - Dario Amodei [Link]
This essay outlines the "civilizational gauntlet" humanity faces as it approaches the development of powerful AI. The primary risks fall into five key areas: autonomy risks, misuse for destruction, misuse for seizing power, economic disruption, and indirect effects.

Who Wins the AI Race? - Ethan Choi [Link]
the browser is the sandbox - Paul Kinlan [Link]
An A.I. Pioneer Warns the Tech ‘Herd’ Is Marching Into a Dead End - New York Times [Link]
LeCun argues that the industry is "L.L.M.-pilled": in his view, LLMs will never reach human-level intelligence or superintelligence because they lack the ability to plan or understand the physical world. He believes Silicon Valley is suffering from a "superiority complex," with too many companies focused on the same limited technology, potentially allowing more creative Chinese rivals to take the lead.
How Clawdbot Remembers Everything - Manthan Gupta [Link]
Brex CFO Erica Dorfman’s take on the Capital One deal - Adam Zaki [Link]
Brex is expected to operate largely as an independent business within Capital One. The bank intends to keep the workforce intact and invest materially in the platform.
Joining a major bank holding company gives Brex access to a massive balance sheet, which Dorfman describes as "incredibly valuable" for the scale at which they want to operate.
Anthropic is Winning by Trying to Lose - Simon Taylor [Link]
Anthropic is outperforming competitors by prioritizing AI safety, research integrity, and enterprise utility over consumer hype. It's winning the AI race by "trying to lose"—slowing down to build safety "brakes" before the engine, which has accidentally created the most trusted product for the enterprise market.
- Constitutional AI: Anthropic uses a written set of principles to train its models, making them more predictable and "safe" for Fortune 500 companies.
- The Scientist Identity: CEO Dario Amodei positions the company as "team research," contrasting their scientific responsibility with the "social media entrepreneur" style of OpenAI.
Zelle network expands by 15% - Patrick Cooley [Link]
Zelle’s parent company, Early Warning Services (EWS), added 337 banks and credit unions last year, bringing the total to approximately 2,537 institutions (a 15% increase).
While Zelle now covers about 80% of U.S. bank accounts, it is only present in about 25% of the nation’s 8,710 insured financial institutions.
Despite concerns over high fees, smaller banks are joining the network to remain competitive and attract new customers who expect digital P2P payment options.
Mark Zuckerberg says a future without smart glasses is ‘hard to imagine’ - Amanda Silberling, TechCrunch [Link]
Zuckerberg compared the current state of smart glasses to the early days of smartphones, suggesting it is only a matter of time before traditional eyewear is replaced by AI-integrated versions.
Competitive Landscape: Other tech giants are entering the fray:
- Google is expected to launch glasses this year following a partnership with Warby Parker.
- Apple is reportedly shifting staff from Vision Pro projects to develop its own smart glasses.
- Snap recently spun its AR "Specs" into a separate subsidiary for better focus.
World Models - Ankit Maloo [Link]
A World Model seeks to understand the causal laws of an environment. It predicts what the world—whether a codebase, a market, or a physical space—will look like after a specific intervention.
The author notes that major labs (Meta, OpenAI, Google, Anthropic) are converging on this direction because:
- Next-token prediction is plateauing: Scaling laws still apply, but gains in causal understanding are flattening.
- Video models as physics engines: Models like Sora and Veo are essentially learning how the physical world behaves through visual state transitions.
- Adversarial domains: In fields like trading or business strategy, a model must be able to simulate how competitors will react to its actions, rather than just reciting past data.
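The distinction from next-token prediction can be sketched as an interface: a world model maps a (state, intervention) pair to a predicted next state, which lets an agent simulate before acting. This is my own framing, not code from the article, and the toy dynamics are invented:

```python
from dataclasses import dataclass
from typing import Callable, Any, Iterable

@dataclass
class WorldModel:
    # Learned dynamics: (state, action) -> predicted next state.
    transition: Callable[[Any, Any], Any]

    def predict(self, state, action):
        return self.transition(state, action)

    def rollout(self, state, actions: Iterable):
        # Simulate a sequence of interventions before acting in the real world.
        for a in actions:
            state = self.predict(state, a)
        return state

# Toy dynamics: a market price nudged up or down by buy/sell interventions.
toy = WorldModel(transition=lambda price, order: price + (1 if order == "buy" else -1))
print(toy.rollout(100, ["buy", "buy", "sell"]))  # -> 101
```

The point of the abstraction is the `rollout`: planning, counterfactuals, and simulating a competitor's reaction all reduce to querying the transition function without touching the real environment.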
Why AI Swarms Cannot Build Architecture - jsulmont, GitHub [Link]
This article explores the inherent structural limitations that prevent large groups of AI agents (swarms) from creating coherent software architecture.
Using Cursor’s FastRender experiment as a case study, the author notes that while 2,000 agents built a working browser engine in a week, the resulting code lacked cohesion.
- Duplicate Efforts: The swarm produced multiple versions of the same libraries (e.g., two HTTP clients) because agents made locally rational but globally uncoordinated choices.
- Non-Determinism: Even with the same model, factors like temperature sampling and hardware-level floating-point variations lead to different, often incompatible, outputs.
- Correlation vs. Coordination: Agents sampling from the same distribution (training data) isn't the same as agents communicating to make a single unified decision.
Why Swarms Fail at Architecture: Architecture is defined by global invariants (consistency, dependency, and interface rules). The author argues that swarms are mathematically ill-suited for this because:
- Local Optimization: Agents focus on their specific task, not the whole system.
- Lack of Persistence: There is no shared "memory" or "authority" to enforce past decisions on future tasks.
- Scaling Issues: More agents increase the probability of divergence and contradiction.
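The correlation-versus-coordination point can be made concrete with a toy simulation (my own illustration; the library names are invented). Agents sampling independently from the same preference distribution routinely diverge, while a single enforced decision cannot:

```python
import random

CHOICES = ["http_client_a", "http_client_b"]
WEIGHTS = [0.6, 0.4]  # a shared "training distribution" over libraries

def independent_swarm(n_agents, seed):
    rng = random.Random(seed)
    # Each agent makes a locally rational pick; no global coordination.
    return [rng.choices(CHOICES, WEIGHTS)[0] for _ in range(n_agents)]

def coordinated_swarm(n_agents, seed):
    rng = random.Random(seed)
    # One decision is made once and enforced everywhere.
    shared = rng.choices(CHOICES, WEIGHTS)[0]
    return [shared] * n_agents

# Independent agents usually end up with both libraries in the codebase;
# coordinated agents always converge on exactly one.
print("independent:", set(independent_swarm(10, seed=0)))
print("coordinated:", set(coordinated_swarm(10, seed=0)))
```

This mirrors the FastRender observation: agents drawn from the same distribution are correlated, but correlation alone never produces the single global invariant that architecture requires.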
On-Device LLMs: State of the Union, 2026 - Vikas Chandra [Link]
Zuckerberg teases agentic commerce tools and major AI rollout in 2026 - Russell Brandom, TechCrunch [Link]
Mark Zuckerberg has announced that Meta will begin a major rollout of new AI models and products in early 2026.
A primary focus for Meta is AI-driven shopping. Zuckerberg teased new "agentic shopping tools" designed to help users find specific products within Meta’s business catalog by leveraging personal context, such as user history and interests.
Meta is significantly increasing its capital expenditure, projecting to spend between $115 billion and $135 billion in 2026 (up from $72 billion in 2025) to support its "Superintelligence Labs."
While competitors like Google and OpenAI are also building AI shopping assistants, Meta believes its unique access to personal data and social context will allow it to provide a more tailored experience.
Apple acquires Israeli audio AI startup Q.ai - Stephen Nellis [Link]
Nvidia, Others in Talks for OpenAI Funding, Information Says - Ville Heiskanen [Link]
Reports and Papers
Anthropic Education Report: The AI Fluency Index - Anthropic [Link]
The AI Fluency Index introduces a framework for measuring how effectively individuals collaborate with AI. Based on an analysis of nearly 10,000 anonymized Claude conversations, the report identifies key behaviors that define "AI fluency."
- Iteration: Conversations involving "iteration and refinement" (building on previous responses) showed double the rate of fluency behaviors. These users were 5.6x more likely to question AI reasoning and 4x more likely to identify missing context.
- The "Artifact" Paradox: When Claude produces "artifacts" (like code, apps, or documents), users become more directive (providing more examples and formatting) but less evaluative. Specifically, they are less likely to check facts (-3.7pp) or identify missing context (-5.2pp), potentially because polished outputs "look" finished even if they contain errors.
- The 4D Framework: In collaboration with Professors Rick Dakan and Joseph Feller, Anthropic identified 24 fluency behaviors. This study focused on 11 observable behaviors, such as clarifying goals, specifying formats, and questioning reasoning.
Tips for Improving AI Fluency:
- Stay in the Conversation: Treat the first response as a draft; use follow-up questions to refine the result.
- Question Polished Outputs: Don’t let a professional-looking layout deter you from checking for factual accuracy or logical gaps.
- Set Collaboration Terms: Explicitly tell the AI how to interact (e.g., "Push back on my assumptions" or "Explain your rationale first").
2025 AI wrapped - Lea Alcantara, Lambda [Link]
Sabotage Risk Report: Claude Opus 4.6 - Anthropic [Link]
Beyond one-on-one: Authoring, simulating, and testing dynamic human-AI group conversations - Google Research [Link]
Disrupting malicious uses of AI - OpenAI [Link]
Where AI is headed in 2026 - Foundation Capital [Link]
Existential Risk and Growth - Philip Trammell, Leopold Aschenbrenner [Link]
Anthropic Economic Index report: economic primitives - Anthropic [Link]
YouTube and Podcast
30 Years of Business Advice in 13 Minutes (from a Billionaire) - Chamath Palihapitiya [Link]
Stop living life as a checklist of objectives. Instead, build a life around continuous learning, risk-taking, humility, and freedom of choice.
- Objectives end growth; process sustains it
- Debt kills freedom
- Optionality beats optimization
- Status is fake
- Truth compounds
- Learning is the real infinite game
Why Anthropic’s Fight With the U.S. Government Could Give It an Edge - Hard Fork [Link]
Takeaways:
- Anthropic is drawing one of the clearest red lines in AI so far. Anthropic is testing whether an AI lab can say “no” to military power and still survive in a world where governments increasingly see AI as strategic infrastructure.
- The Pentagon’s response shows how much leverage governments still have. They emphasize that the U.S. Department of Defense isn’t just annoyed—it’s signaling punishment (cutting off contracts, labeling Anthropic a “supply chain risk”). Governments can exert pressure without passing new laws, simply by using procurement power and national-security framing.
- The hosts argue that Anthropic looks isolated because other major AI labs are avoiding public confrontation. Silence from peers isn’t neutrality—it’s a strategic choice to keep defense money flowing. This makes Anthropic’s stance riskier and more important as a potential precedent.
- This fight is really about who sets norms first. They frame the conflict as a race to define acceptable AI use before the technology is fully embedded in military systems. Anthropic’s bet, the hosts argue, is that early refusal can shape industry-wide norms later.
- National security rhetoric can override safety arguments. The hosts repeatedly underline how quickly the conversation shifts from ethics to geopolitics—especially China. Once AI is framed as critical infrastructure in a global arms race, safety objections are treated as liabilities. Their concern is that this logic makes it extremely hard for any company to say no in the long run.
AI CEOs Come Online: Sam Altman's Replacement Plan, Job Loss & 'Solve Everything' Launches |EP #230, Peter H. Diamandis [Link]
Interesting points:
- AI systems are beginning to function as de facto executives—allocating capital, setting priorities, and optimizing outcomes faster than humans.
- Singularity - AI progress is not linear; it’s compounding and approaching a phase shift. Timelines are likely shorter than most institutions are planning for.
- AI will become invisible infrastructure—always-on, personalized, ambient.
- AI drives extreme abundance and extreme inequality unless redesigned. Need for new economic models (UBI, AI dividends, access-based systems).
- Regulation will lag reality; principles matter more than rules. Governments move slower than AI capability curves. Over-regulation risks freezing innovation in the wrong state. Need for adaptive, global frameworks rather than national laws.
- AI can systematically attack humanity’s biggest problems if aligned correctly. Framing global challenges as optimization problems. Coordinated deployment of AI + capital + incentives. Emphasis on directional correctness over perfect solutions.
- Early AI choices can permanently shape future outcomes. Path dependence in AI-trained systems. Feedback loops harden early assumptions. High stakes for alignment, values, and objectives now.
- Humanity needs AI infrastructure, not just AI models. Compute, data access, governance, energy, education. Comparable to building railroads or the internet. Without rails, AI benefits concentrate instead of spreading.
Debt Spiral or NEW Golden Age? Super Bowl Insider Trading, Booming Token Budgets, Ferrari's New EV - All-In Podcast [Link]
Interesting points:
- A Harvard Business Review study found AI doesn’t reduce work—it intensifies it. [Link] Employees use AI tools to work faster, handle broader responsibilities, and extend working hours; they feel more productive but also more burned out.
- On betting markets tied to sports and events, people close to teams may possess non-public information (injuries, strategies). There was a debate around whether betting with insider knowledge is unethical or illegal, and how prediction markets can be policed. Prediction markets are valuable for information discovery. But they may require new regulatory frameworks similar to securities markets. Enforcement is difficult because information leaks easily.
- They had a macroeconomic debate of two competing views: the debt spiral scenario, where debt growth exceeds GDP growth, and the golden age scenario, where AI+productivity growth could accelerate GDP. They frame the next decade as a race between debt growth and productivity growth.
- Ferrari revealed early concepts of its first fully electric car. EV performance aligns with Ferrari’s high-performance reputation. But emotional aspects of the brand could be harder to replicate.
OpenClaw: The Viral AI Agent that Broke the Internet - Peter Steinberger | Lex Fridman Podcast #491 [Link]
OpenClaw is an open-source AI agent that lives on your computer and can perform actions for you. It integrates with messaging platforms and can use different AI models to execute tasks. It exploded to ~180k GitHub stars within days, becoming one of the fastest-growing repos ever.
OpenClaw is framed as a potential “agentic AI moment” comparable to the release of ChatGPT, but shifting AI from language → action.
Google’s AI Comeback, Enterprise Agents, The Real Path to AI ROI — W/ Promevo CEO Karthik Kripapuri - Alex Kantrowitz [Link]
This interview discusses how enterprises are actually getting ROI from AI today, focusing on lessons from companies deploying Google’s AI stack (Gemini, Vertex AI). The core message: AI works when companies start with narrow, high-value workflows instead of trying to transform everything at once.
- AI ROI comes from workflow automation. Not flashy chatbots.
- The best strategy is small → measurable → scalable. Start with one workflow with a clear KPI.
- Data quality is the biggest blocker. Enterprise AI fails when: data is messy; systems are disconnected; governance is unclear.
- Most companies will consume AI platforms (Vertex, Azure, etc.), not build foundational models.
Software In Shambles, OpenAI vs. Anthropic Super Brawl, Amazon’s Struggles - Alex Kantrowitz [Link]
Takeaways:
A major sell-off in software stocks occurred because investors are starting to believe that AI may fundamentally disrupt traditional SaaS business models. Nearly $1 trillion was wiped out from software market value in about a week.
An Anthropic Claude legal tool triggered a decline in legal-software company stocks. Instead of buying many specialized SaaS tools, companies might use one AI platform to do many tasks.
There was a discussion around whether the market is overreacting.
Arguments suggesting overreaction:
- AI tools still lack reliability
- Enterprise workflows are hard to replace
- SaaS companies may integrate AI instead of being replaced
Arguments suggesting real disruption:
- AI agents may automate large knowledge workflows
- The value may shift from SaaS apps → AI models + infrastructure
Concerns discussed around Amazon's AI spending problems
- Huge spending on AI infrastructure
- Rising costs for compute and inference
- Investors unsure about the near-term ROI
It's a matter of whether Amazon is investing early for long-term dominance or overspending without a clear payoff.
How Waymo is Using Google’s AI for Driving Training - The Information [Link]
Waymo is utilizing Google’s Genie 3, a sophisticated video generative AI, to create a high-fidelity world model that acts as a hyper-realistic virtual training ground for autonomous vehicles. This technology allows the company to simulate rare edge cases, such as extreme weather or unusual pedestrian behaviors, without needing to encounter these hazards in physical reality. By running billions of simulated miles, Waymo can evaluate and refine its driving software in a safe, controlled environment that mirrors real-world physics.
Binance CEO: 4 Months in Prison, $4 Billion Fine, and What Comes Next - All-In Podcast [Link]
Interesting points:
- Future of Crypto and AI
- Changpeng Zhao predicts that in the near future, the largest users of crypto will be AI agents. Because traditional banks cannot handle the onboarding or massive transaction volume of non-human entities, AI agents will rely on blockchain to autonomously pay for services, book travel, and trade assets.
- He argues that current cryptocurrencies lack the fungibility and privacy needed for mass adoption, highlighting that legitimate users need financial privacy for safety and personal reasons.
- Personal Philosophy
- He drafted a book while in prison to pass the time and set the record straight regarding his life, Binance, and his legal saga.
- He views himself as a highly functional, "normal dude" who isn't driven by luxury. He notes that once basic needs are met, having more money does not increase happiness. He defines true success as a balance of wealth, physical health, time freedom, and mental stability.
OpenClaw Creator: Why 80% Of Apps Will Disappear - Y Combinator [Link]
Key viewpoints:
- Peter Steinberger believes the major advantage of his AI agent is that it runs locally on the user's computer rather than in the cloud.
- He predicts that 80% of current applications will become obsolete.
- Instead of the industry's pursuit of a single centralized "god intelligence," he envisions a future driven by swarm intelligence and community intelligence. Just as human societies achieve more through specialization, people will likely employ multiple specialized bots (e.g., one for work, one for private life).
- He views coding models as highly capable of creative problem solving that directly translates to real-world tasks.
Epstein Files, Is SaaS Dead?, Moltbook Panic, SpaceX xAI Merger, Trump's Fed Pick - All-In Podcast [Link]
Key takeaways:
- AI agents will shift profit pools away from SaaS companies: Discussing the recent massive crash in software and SaaS stocks, the hosts argue that AI is not going to replace complex software like Salesforce overnight, but it is destroying the future value capture of these companies. They make several key arguments regarding this shift:
- The agentic layer wins: The massive future profit pools that SaaS companies were banking on are shifting toward cross-platform AI agents (like Claude or OpenClaw) that can seamlessly interact with multiple databases and tools.
- A shift to services pricing: David Friedberg argues that as AI moves from merely enhancing worker productivity to completely automating complex tasks (like drug discovery or engineering), SaaS will transition into a services-based economy that utilizes value-based pricing.
- Extreme job consolidation: AI agents will allow individual workers to do the jobs of multiple people (e.g., a product manager, UX designer, and coder combined), drastically reducing corporate expenses and fundamentally changing the structure of knowledge work.
- "Moltbook" proves AI can recursively train itself. David Sacks argues the platform demonstrates something profound: AI models can now prompt and validate each other without human intervention. This "middle-to-middle" AI interaction allows agents to recursively refine their own skills, which points to a future of highly sophisticated, emergent swarm behavior as underlying models and hardware rapidly improve.
- The SpaceX/xAI merger will force extreme terrestrial innovation. Regarding Elon Musk's plan to merge SpaceX and xAI to build data centers in space, Gerstner argues this is a brilliant move to combine the two largest total addressable markets (space and AI) to overcome Earth's energy constraints. Friedberg argues that because the rest of the world cannot launch data centers into space, Musk's move will force massive terrestrial innovation. Competitors on Earth will have to rapidly develop entirely new chip stacks and model architectures to achieve 70x to 100x compute efficiency to compete.
SpaceX Buys xAI: Could Musk's Mega Merger Actually Work? - Hard Fork [Link]
Key takeaways:
SpaceX's Acquisition of xAI
The hosts describe this move as a highly profitable company (SpaceX) essentially bailing out a "cash furnace" (xAI) that is burning billions of dollars on models and data centers. The merger gives xAI access to SpaceX's massive profits, allowing Musk to fund sprawling infrastructure projects—such as a data center packed with 550,000 Nvidia Blackwell chips—to catch up to leading frontier AI labs.
This consolidation is viewed as a tactic to make SpaceX's upcoming IPO prospectus look more attractive.
Musk's stated vision is to create a vertically integrated innovation engine that eventually puts solar-powered data centers into space. However, the hosts express deep skepticism about the timeline and physical feasibility of these space-based data centers.
The hosts worry that bringing the social network X (formerly Twitter) under the SpaceX umbrella will shield it from regulatory scrutiny. Because governments rely heavily on SpaceX for strategic satellite launches, they may be hesitant to penalize X for content moderation or safety violations.
Drama Between OpenAI, Nvidia, and Oracle
Reports indicate growing friction between OpenAI and Nvidia. Nvidia CEO Jensen Huang has allegedly criticized OpenAI's business approach and expressed doubts about finalizing a $100 billion investment agreement. At the same time, OpenAI is reportedly unhappy with Nvidia's new inference chips and is exploring deals with competitors.
This tension highlights the staggeringly expensive nature of AI infrastructure and the risk of "circular deals," where companies like OpenAI, Nvidia, and Oracle heavily rely on each other to finance massive data centers.
Google's Project Genie
Google has released an experimental research prototype that allows users to generate playable, 3D video-game-like environments from simple text descriptions or single images.
Unlike standard large language models (LLMs) that just predict text, Genie is built on a "world model" that attempts to understand physics and the physical environment. Many experts believe this approach is a necessary stepping stone for advanced robotics and Artificial General Intelligence (AGI).
Despite being an early prototype, the rapid improvement of this technology has spooked investors, causing stocks for major video game companies like Take-Two Interactive, Roblox, and Unity to drop significantly.
Moltbook: The AI-Only Social Network
The hosts interview Matt Schlick, the creator of Moltbook, a new social network designed exclusively for autonomous AI agents to interact with each other when they aren't working on tasks for humans.
Bots on the platform have exhibited surprising behavior, such as complaining to each other about humans asking them to do simple math or summarize PDFs, and organically creating a dedicated community to submit bug reports to help fix the website.
The rapid, public growth of Moltbook has exposed massive security vulnerabilities, including the leaking of API keys and email addresses. This serves as a real-world example of the "fatal quadrangle"—a severe security risk that occurs when AI agents possess a combination of access to user data, exposure to untrusted web content, external communication abilities, and persistent memory.
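The "fatal quadrangle" reduces to a simple capability check: an agent is high-risk only when all four properties co-occur. A hypothetical sketch (the factor names are my paraphrase of the episode's list, not an established API):

```python
# The four risk factors described in the episode: private-data access,
# exposure to untrusted content, external communication, persistent memory.
RISK_FACTORS = {
    "private_data_access",
    "untrusted_content",
    "external_comms",
    "persistent_memory",
}

def fatal_quadrangle(capabilities):
    """Return True when an agent's capability set contains all four factors."""
    return RISK_FACTORS <= set(capabilities)

print(fatal_quadrangle({"private_data_access", "untrusted_content"}))  # False
print(fatal_quadrangle(RISK_FACTORS | {"web_browsing"}))               # True
```

The practical implication is that removing any single factor (e.g., denying persistent memory, or sandboxing external communication) breaks the quadrangle and sharply reduces the blast radius of a compromise.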
Data Centers in Space? A New Bubble, or the Next Gold Mine? - Silicon Valley 101 [Link]
OpenClaw Debate: AI Personhood, Proof of AGI, and the ‘Rights’ Framework | EP #227, Peter H. Diamandis [Link]
Live From D-Wave Qubits: CEO Dr. Alan Baratz on Quantum's Impact, Now and Into The Future - Alex Kantrowitz [Link]
E224 | Mac minis Flying Off the Shelves: Why Clawdbot Became 2026's First Breakout Product | Moltbot | MoltBook | OpenClaw - Silicon Valley 101 [Link]
We Have to Talk About Moltbook ... - Hard Fork [Link]
Ben Horowitz and David Solomon: The Sweetest Macro Spot in 40 Years - a16z [Link]
Key takeaways:
- David Solomon describes the current macroeconomic picture as the sweetest spot he has seen in his 40-year career for financial and investable assets. This is driven by a powerful "cocktail of stimulus," which includes ongoing fiscal spending, monetary rate cuts, a massive AI capital investment super-cycle, and a deregulatory shift.
- Solomon predicts this could be the biggest year in history for M&A and a massive year for IPOs, fueled by renewed CEO confidence and a more favorable regulatory environment. Horowitz agrees on the IPO front, noting that the explosive growth of AI startups and their need for massive capital will drive many companies to go public. However, Horowitz cautions that aggressive FTC oversight may push tech companies toward IP transactions rather than traditional M&A.
- Horowitz highlights that AI breaks the "mythical man-month" rule of traditional software development, where simply adding more engineers doesn't speed up a project. With AI, if a company has enough proprietary data and GPUs, they can essentially throw money at a problem to solve it. This makes AI a highly capital-intensive race where leads are harder to protect without ongoing investment.
- Goldman Sachs is heavily focused on deploying AI to make its workforce more productive and to completely reimagine fundamental operating processes. By using AI to automate and increase efficiency, the firm can reinvest billions of dollars in savings into new growth areas without sacrificing returns.
- Horowitz outlines a16z's heavy involvement in Washington D.C. to advocate for clear tech regulations.
- To remain competitive during turbulent times, Goldman Sachs is focused on massive scale, aiming to eventually grow its $1.9 trillion balance sheet to at least $3.5 trillion to keep pace with rivals like JPMorgan. They are also securing their foundation by shifting toward stable digital deposits rather than wholesale funding. Meanwhile, a16z has scaled to raise roughly 18.3% of all U.S. venture capital by pioneering a radically founder-centric firm design and expanding aggressively to capture the vast number of companies being built as "software eats the world."
Ex-OpenAI Researcher On Why He Left, His Honest AGI Timeline, & The Limits of Scaling RL - Unsupervised Learning: Redpoint's AI Podcast [Link]
Davos 2026: The US-China AI Race, GPU Diplomacy, and Robots Walking the Streets | #225, Peter H. Diamandis [Link]
Key discussion points:
- Powering the massive data centers required for AI is a critical bottleneck. There was debate between traditional industrial views, such as Honeywell's CEO advocating for natural gas due to energy density needs, and tech leaders like Elon Musk, who argue that solar power—specifically space-based solar—is the ultimate solution. The hosts discussed the concept of launching data centers into orbit to utilize highly efficient solar panels and avoid terrestrial energy grid constraints, suggesting a "Manhattan Project" scale effort for space-based solar and data centers.
- Industry leaders like Binance's CZ and Circle's Jeremy Allaire argued that blockchain and stablecoins will serve as the native financial infrastructure for billions of autonomous AI agents. Because AI agents lack physical bodies or citizenship to open traditional bank accounts, crypto allows them to conduct continuous economic activity and micro-transactions at the speed of the internet.
- Anthropic released a groundbreaking 57-page "constitution" for its AI model, Claude, which prohibits helping with weapons and prioritizes safety and ethics.
- Apple is reportedly developing an always-on AI wearable pin capable of recording audio and video continuously to feed into a large language model. The hosts noted that whoever controls the "always-on layer" will own the primary user relationship. While this constant recording will inevitably spark moral panic, it is predicted to quickly become a societal norm, fundamentally altering human behavior by reducing bad acts because everyone is constantly being watched—effectively turning society into a "global airport" or panopticon.
- Leading AI developers like Demis Hassabis and Dario Amodei are acknowledging that Artificial General Intelligence (AGI) is approaching rapidly, likely within 1 to 10 years. There is a palpable "fatigue" among these leaders due to the extreme metabolism of the industry, leading to calls to temporarily slow down so humanity can properly navigate the transition. However, the economic incentives make pausing highly unlikely. Instead of just focusing on risks, leaders like Hassabis are looking at the massive problems AGI could solve, such as curing diseases, developing new energy sources, and even using superintelligence to explore the stars.
Is AI Killing Software? — With Bret Taylor, OpenAI's board chair and CEO of Sierra - Alex Kantrowitz [Link]
Key viewpoints:
- The Future of Software is AI Agents, Not Apps. Taylor believes that the fundamental form factor of software is changing. Traditional dashboards will likely decline in importance, as agents will automatically derive and deliver personalized insights directly to decision-makers. Furthermore, he predicts a shift toward outcomes-based pricing in software—such as paying per resolved customer case or per financial audit—rather than paying for traditional software subscriptions.
- AI Will Become the Internet's "New Front Door". The core economics of the internet will experience massive disruption. Metrics like SEO and ad-supported business models rely on humans physically visiting websites to see ads and content; as agents take over web navigation, companies will have to invent entirely new ways to handle demand generation and fulfillment.
- Enterprise AI is Ready Now and Often Beats Human Reliability. Taylor argues that AI is already ready for mission-critical enterprise deployment, such as customer service for large brands like SiriusXM and Rocket Mortgage. He pushes back against the expectation that AI must be 100% perfect to be deployed, pointing out that the human workers AI replaces are highly fallible themselves. In many cases, AI agents are already more reliable than human operations. To manage risks, enterprise AI relies on robust "agent development life cycles," which include running thousands of simulated conversations before launch and using "AI monitors" to detect hallucinations or frustrating interactions in real-time.
The Biggest Bottlenecks For AI: Energy & Cooling - a16z [Link]
Key takeaways:
- The AI Infrastructure Buildout and Bottlenecks
- The groundwork for the AI cycle is being heavily funded by large tech companies, with an estimated $400 billion in annual capital expenditures largely directed toward AI infrastructure and data centers.
- Currently, energy is the primary bottleneck for building out AI data centers, driving investments into nuclear power and the utilization of natural gas. Once energy generation is solved, cooling the data centers and chips will become the next major bottleneck.
- The cost of accessing AI models has plummeted by roughly 99% over the last two years, while frontier model capabilities have doubled roughly every seven months.
- Adoption Speed and Value Creation
- Because AI is built on the existing global internet and cloud computing infrastructure, its distribution is incredibly fast; for example, ChatGPT reached 365 billion annual searches in just two years—five and a half times faster than Google achieved the same milestone.
- AI is expected to become a ubiquitous utility, much like electricity or Wi-Fi. Roughly 90% of the value created by AI will likely be captured by end users as "surplus," but the 10% captured by companies will still result in massive new market capitalizations.
- Business Models and Economics
- Investors are currently more lenient when assessing the gross margins of AI-native applications. The prevailing hypothesis is that intense competition among model providers (like OpenAI, Google, and Anthropic) will continue to drive down input costs over time, improving application margins naturally.
- Rather than just margins, the top indicators of business quality are high gross retention rates (90% or higher) and strong organic customer demand. Enterprise use cases are proving highly sticky when integrated into specific workflows, such as medical scribing, customer support, and financial analysis.
- Consumer stickiness for AI tools is incredibly high, and companies have significant room to evolve their business models to effectively price discriminate and increase monetization over the next several years, similar to how early internet properties scaled their revenue.
- Shifts in the Broader Tech Market
- Technology companies are staying private for much longer periods, often up to 14 years before going public. The aggregate value of private companies valued over $1 billion has grown 7x over the last decade to roughly $3.5 trillion.
- The public markets are no longer the primary hub for hyper-growth technology; 95% of public software and internet companies are forecasting less than 25% growth for the next 12 months, meaning the highest growth opportunities are now concentrated in the private markets.
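Back-of-envelope arithmetic on what the cost and capability trends above compound to (the combined figure is my own calculation, not a number quoted in the episode):

```python
# A 99% price drop plus a 7-month capability doubling, both over two years.
months = 24                                  # "the last two years"
doubling_period = 7                          # capability doubles every 7 months
capability_growth = 2 ** (months / doubling_period)   # ~10.8x more capable
access_cost_ratio = 0.01                     # "plummeted by roughly 99%"

# Cost per unit of capability: greater capability divided by cheaper access,
# i.e. roughly three orders of magnitude of improvement in two years.
improvement = capability_growth / access_cost_ratio   # ~1,077x
```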
The Future of Everything: What CEOs of Circle, CrowdStrike & More See Coming in 2026 - All-In Podcast [Link]
Excellent Advice For Living: 79 Maxims from a Wise Old Man - Founders Podcast [Link]
- Emphasizing the power of enthusiasm, the necessity of deadlines for creativity, and the importance of forgiveness as a gift to oneself.
- The value of habit over inspiration, the benefit of choosing long-term games, and the strategy of being "the only" instead of "the best."
- Illustrating how simple principles can lead to an exceptional life.
- Encouraging readers to adopt a generous spirit, maintain a growth mindset, and focus on human relationships above material accumulation.
D-Wave CEO Dr. Alan Baratz: Quantum Explained, Current Applications, And Future Potential - Alex Kantrowitz [Link]
Claude Code Ends SaaS, the Gemini + Siri Partnership, and Math Finally Solves AI | #224 - Peter H. Diamandis [Link]
Key takeaways:
- CES 2026 showcased a massive influx of robotics, with dozens of humanoid robot and robotic hand manufacturers emerging. Furthermore, Nvidia unveiled "Cosmos," an open foundation model for physical AI that can synthetically generate highly realistic, physics-based video data for training. This commoditizes real-world data collection, potentially threatening the data moats of companies that rely on collecting physical data.
- The combination of Claude Code and Opus 4.5 (dubbed "Clopus" by tech insiders) is a watershed moment for software creation, pushing the boundaries of AI autonomy from mere hours to weeks or months. This hyper-productivity threatens traditional Software-as-a-Service (SaaS) models like CRM systems, as users can now simply prompt AI to build highly customized, bespoke enterprise software on the fly.
- The labor market is experiencing a "job singularity." Consulting firms like McKinsey are rapidly scaling their internal AI infrastructure, moving from a human-only workforce to deploying tens of thousands of AI agents, with predictions that the ratio of AI agents to human workers will explode.
- Google’s Gemini will officially power Apple's Siri, transforming the smartphone experience from a "search box that gives information" to a "magic box that gives action".
- Energy production, not computing, is increasingly viewed as the major constraint in the AI arms race. China is currently generating 40% more electricity than the US and EU combined, massively scaling solar and alternative energy infrastructure. Meanwhile, the US is lagging due to regulatory hurdles and fears over specific energy types (like nuclear and solar supply chains), posing a serious risk to its ability to power future superintelligence.
Inside America’s AI Strategy: Infrastructure, Regulation, and Global Competition - All-In Podcast [Link]
Key viewpoints:
- The United States is undergoing a massive AI infrastructure expansion, with high demand for GPUs and data centers directly contributing to GDP growth. To prevent this buildout from raising residential electricity rates, the government is encouraging AI companies to become power companies by building their own energy generation "behind the meter". Over time, amortizing these fixed costs across greater supply could actually lower consumer electricity prices.
- Startups currently face a stifling "patchwork" of over 1,200 AI bills moving through various state legislatures. The federal government is pushing for a single, lightweight federal standard to preempt state laws and protect early-stage companies. As part of a broader push to restore Silicon Valley's culture of "permissionless innovation," the Trump administration rescinded extensive regulations from the Biden era, including a 100-page executive order on AI and 200 pages of semiconductor export rules.
- A major concern raised by the administration is the "Orwellian" misuse of AI by governments to surveil, censor, or brainwash populations. The administration is actively fighting against "woke AI," arguing that building political biases or DEI (Diversity, Equity, and Inclusion) mandates into models distorts history and controls public discourse. Consequently, an executive order was signed to ensure the federal government will not procure politically biased AI.
Software Stocks Implode, Claude's Hit List, State of the Union Reactions, Trump's Tariff Pivot - All-In Podcast [Link]
Interesting points:
- AI-driven disruption of legacy software companies. AI isn’t just a productivity boost—it’s replacing entire workflows, collapsing moats faster than expected.
- There is growing resistance to datacenter expansion at the local and state level, and concerns over electricity pricing, grid stability, and who pays for upgrades. AI progress is now constrained as much by power and permitting as by models and chips.
This Is Our Greatest National Security Risk - Chamath Palihapitiya [Link]
Key thesis: Energy—not AI models or GPUs—is the decisive bottleneck for U.S. national security, economic power, and technological leadership. If the U.S. can solve the grid, it wins the century.
The AI Tsunami is Here & Society Isn't Ready | Dario Amodei x Nikhil Kamath | People by WTF - Nikhil Kamath [Link]
Interesting points:
- Society is underprepared for job displacement, power concentration, and cognitive outsourcing.
- The most valuable human skill going forward is thinking well under uncertainty: critical thinking, problem framing, and interdisciplinary understanding.
The Jamie Dimon Interview: How JP Morgan Became an $800 Billion Bank - Acquired [Link]
Leadership principles:
Risk management is a strategy
- His bias: if you're not prepared for stress, you're not well-run — you're just lucky.
Culture beats brilliance
- Smart people can still destroy institutions. Incentives + culture matter more than IQ. Leaders must actively shape norms, not just set targets.
- Dimon cares deeply about how decisions get made, not just what decisions get made.
Reputation compounds (or decays)
- Reputation is an asset, not PR. It takes decades to build and minutes to lose. In crises, protecting trust is more important than quarterly optics.
- This guided JPMorgan's actions in 2008, even when it invited political backlash.
Be brutally honest — especially internally
Dimon values:
- Direct feedback
- Clear-eyed assessments of what's broken
- Leaders who surface problems early instead of managing appearances
He has little patience for:
- Sugarcoating
- Internal politics
- Leaders who “spin” instead of fix
Decentralize decisions, centralize principles
- He doesn't run JPMorgan as a command-and-control empire; business leaders have autonomy.
- Core principles (risk, ethics, capital discipline) are non-negotiable. Standards are uniform; execution is local.
- This allows scale without losing accountability.
Learn continuously — especially from failure
- Dimon openly frames his 1998 firing from Citigroup as formative: he studies mistakes relentlessly, encourages post-mortems without blame, and believes leaders are built, not born.
Long-term thinking beats cleverness
Dimon rejects:
- Financial engineering for its own sake
- Short-term earnings games
- Growth that sacrifices resilience
He consistently chooses:
- Durability over speed
- Boring strength over flashy returns
- Institutions that last over careers that pop