Every AI Founder Should Be Asking These Questions

Jordan Fisher is the co-founder & CEO of Standard AI and now leads an AI alignment research team at Anthropic. In his talk at AI Startup School on June 17th, 2025, he frames the future of startups through questions rather than answers, asking how founders should navigate a world where AGI may be just a few years away. He surfaces the big questions startups should be asking in the age of AGI: Should you even start a company right now? What happens when software becomes commoditized? How do you build trust as teams shrink and AI takes on more responsibility?

October 7, 2025 • 40:35

Table of Contents

0:00-7:56
8:03-15:55
16:02-23:57
24:03-31:57
32:03-40:27

🤔 Why Is Jordan Fisher More Confused Than Ever About AI's Future?

Personal Reflection on Uncertainty in AI

Jordan Fisher opens with a striking admission of confusion about AI's trajectory, despite his extensive technology background. This represents a fundamental shift from his previous ability to predict 5-10 year technology trends.

Key Insights:

  1. Lost Predictive Power - Previously could see 5-10 years ahead in technology, now limited to 3 weeks or less
  2. Career Strategy Impact - Has historically built companies and planned career moves around long-term trend predictions
  3. Scientific Approach to Confusion - Views confusion as the starting point for interesting discoveries and breakthroughs

The Value of Strategic Questions:

  • Critical Timing: AI's rapid pace makes this an essential moment to pause and ask fundamental questions
  • Startup Relevance: Question-asking skills are crucial for running startups, research teams, and making life decisions
  • Personal Stakes: These questions directly impact strategy, product development, and team building approaches

Timestamp: [0:00-1:57]

🚀 Should You Even Start a Startup in the Age of AI?

The Fundamental Question for Entrepreneurs

Fisher poses the most basic yet profound question facing potential founders today: whether starting a startup makes sense when AI is reshaping everything.

Core Strategic Considerations:

  1. Impact on Everything - AI affects strategy, product development, team building, and go-to-market approaches
  2. Daily Evolution - Answers to these questions change continuously as AI capabilities advance
  3. Comprehensive Planning - Need to consider hiring, fundraising, product strategy, and market approach simultaneously

The Startup Paradox:

  • Focus is Everything - Startups' competitive advantage comes from superior focus compared to large companies
  • Focus on Everything - Founders must simultaneously manage hiring, fundraising, product, strategy, and go-to-market
  • Constant Crisis Management - Product launches interrupted by team departures and unexpected challenges

Why Founders Are Well-Positioned:

  • Question-Answering Experience - Founders are accustomed to addressing every type of question constantly
  • Societal Relevance - The skills needed for startup management align with navigating AI's societal impact
  • Adaptability Training - Startup experience provides preparation for handling AI's uncertainties

Timestamp: [1:57-3:16]

🔮 How Should Startups Plan for AGI Arriving in 2-3 Years?

Strategic Planning Beyond Current AI Capabilities

Fisher argues that startups should extend their planning horizon from the commonly recommended 6 months to 2 years, anticipating AGI's arrival.

Current Best Practices vs. Extended Planning:

  • Standard Advice - Plan 6 months ahead for next foundation model capabilities
  • Recommended Approach - Plan 2 years ahead assuming AGI arrival within 2-3 years
  • Strategic Balance - Take AGI seriously without creating rigid long-term plans due to extreme uncertainty

Planning Considerations:

  1. Hiring Strategy - How will AGI change talent needs and team composition?
  2. Marketing Approach - What happens to traditional marketing when AI transforms customer behavior?
  3. Go-to-Market Evolution - How will sales and distribution change with AI-powered buyers and sellers?

Founder Responsibility:

  • Job Requirement - Failing to consider AGI's impact across every business function is failing at the job of a founder
  • Uncertainty Management - Balance serious planning with flexibility for unpredictable developments
  • Comprehensive Thinking - Must consider AI's impact on every aspect of business operations

Timestamp: [3:16-4:16]

⚡ Why Will AI Adoption Accelerate Faster Than Expected?

The Buy-Side AI Revolution

Fisher challenges the conventional wisdom that AI adoption will be slow due to enterprise sales cycles, arguing that buyers themselves will be AI-powered.

Traditional Slow Adoption Theory:

  • Enterprise Inertia - Large companies are slow to recognize trends and make purchasing decisions
  • Sales Cycle Delays - Fortune 500 companies take years to digest and implement new SaaS products
  • Market Protection - Slow adoption creates opportunities for startups to build and sell AI-powered solutions

The Buy-Side Acceleration Factor:

  1. AI-Armed Enterprises - Buyers will have AGI and strong agents within 2 years
  2. Native AI Integration - Teams will use advanced LLMs for purchasing decisions and adoption acceleration
  3. Dual-Sided Revolution - AI transforms both product creators and buyers simultaneously

Market Dynamic Shifts:

  • Water Rising Analogy - AI benefits all players, not just startups; incumbents also gain advantages
  • In-House Development - Large enterprises building custom solutions with AI-assisted development tools
  • Sales Evolution - AI-powered outbound sales meeting AI-powered procurement creates new marketplace dynamics

Timestamp: [4:22-6:03]

💻 Will Software Become Completely Commoditized by AI?

The Future of SaaS and Custom Development

Fisher explores whether traditional software companies will survive when AI makes code generation trivial, presenting two contrasting scenarios.

Scenario 1: Complete Commoditization

  • Enterprise In-House Development - Companies build everything internally using AI-powered tools like Claude Code
  • Consumer App Elimination - Users generate apps on-demand instead of downloading pre-built applications
  • Product Manager Focus - Organizations need only in-house product managers to direct AI development
  • On-Demand Everything - Phones create functionality instantly based on user requests without traditional "apps"

Scenario 2: Quality Bar Elevation

  • Exceptional Standards - AI automation enables dramatically higher quality expectations
  • Team Amplification - Great teams working with AI can achieve unprecedented results
  • Vertical Differentiation - Impact varies significantly across different industries and use cases
  • Competitive Advantage - Superior AI-human collaboration becomes the new differentiator

On-Demand Code Generation:

  1. Real-Time Development - Apps generate new functionality as users encounter limitations
  2. User-Specific Features - Code created dynamically for individual user needs
  3. Trust Requirements - On-demand backend and database changes require extremely reliable AI systems (a sketch of such a trust gate follows below)
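
To make the trust requirement concrete, here is a minimal, hypothetical sketch of on-demand feature generation behind a trust gate. The `call_llm` helper, the prompt, and the static safety check are all assumptions for illustration, not a real API, and the check shown is nowhere near sufficient for production use:

```python
import ast

def call_llm(prompt: str) -> str:
    """Placeholder for a call to a code-generating model."""
    raise NotImplementedError("wire up a model client here")

def passes_trust_gate(source: str) -> bool:
    """Toy static check: reject imports and dunder access outright."""
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False
    for node in ast.walk(tree):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            return False
        if isinstance(node, ast.Attribute) and node.attr.startswith("__"):
            return False
    return True

def generate_feature(user_request: str):
    """Generate a feature the moment a user hits a limitation,
    running it only if it clears the trust gate."""
    source = call_llm(f"Write a Python function `feature()` that: {user_request}")
    if not passes_trust_gate(source):
        raise PermissionError("generated code failed the trust gate")
    namespace: dict = {}
    exec(source, namespace)  # real sandboxing elided; never do this in production
    return namespace["feature"]()
```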

Timestamp: [6:03-7:56]

💎 Summary from [0:00-7:56]

Essential Insights:

  1. Unprecedented Uncertainty - Even experienced tech leaders can no longer predict 5-10 year trends, seeing only 3 weeks ahead in AI's rapid evolution
  2. AGI Planning Imperative - Startups must plan 2 years ahead for AGI arrival, considering impacts on hiring, marketing, and go-to-market strategies
  3. Dual-Sided AI Revolution - Both product creators and buyers will be AI-powered, accelerating adoption beyond traditional enterprise sales cycle predictions

Actionable Insights:

  • Question whether starting a startup makes sense in the current AI landscape, but recognize founders' unique skills for navigating uncertainty
  • Extend planning horizons from 6 months to 2 years while maintaining flexibility for unpredictable AI developments
  • Consider two software futures: complete commoditization through AI-generated code or elevated quality bars requiring exceptional AI-human teams
  • Prepare for on-demand code generation that creates functionality in real-time based on user needs
  • Recognize that AI's "rising water" effect benefits incumbents and startups equally, changing competitive dynamics

Timestamp: [0:00-7:56]

📚 References from [0:00-7:56]

People Mentioned:

  • Jordan Fisher - Co-founder & CEO of Standard AI, now leads AI alignment research team at Anthropic

Companies & Products:

  • Anthropic - Fisher's current employer where he runs an alignment research team
  • Y Combinator - Startup accelerator Fisher has been through
  • Standard AI - Computer vision company Fisher co-founded and led as CEO
  • Claude Code - AI-powered development tool mentioned for enterprise in-house software creation

Technologies & Tools:

  • AGI (Artificial General Intelligence) - Advanced AI systems expected to arrive within 2-3 years
  • Foundation Models - Large language models that serve as the basis for AI applications
  • LLM (Large Language Models) - AI systems that will be used by enterprise teams for decision-making

Concepts & Frameworks:

  • AI Alignment Research - Field focused on ensuring AI systems behave safely and as intended
  • Enterprise Sales Cycle - Traditional slow adoption process for Fortune 500 companies purchasing new technology
  • On-Demand Code Generation - Concept of creating software functionality in real-time based on user needs

Timestamp: [0:00-7:56]

🤖 What Are the Key Questions AI Founders Should Ask About Product Strategy?

Product Development and Distribution Strategy

Trust as a Foundation:

  • Current AI Limitations: AIs are not yet trustworthy enough to handle critical decisions autonomously
  • Trust as Central Theme: Trust will be the determining factor in how product strategy questions are resolved
  • User Interface Evolution: Trust directly impacts how users will interact with AI-powered products

UI and Multimodal Interface Design:

  1. Generative UI Potential - While generative UI hasn't fully materialized yet, it shows promise for future development
  2. On-Demand UI Questions - Whether on-demand interfaces are optimal or if completely different approaches are needed
  3. Multimodal Integration - Combining auditory, visual, video, and text inputs seamlessly

Contextual User Experience:

  • Meeting Users Where They Are: Interface choice should depend on user context (crowded areas, privacy needs, convenience)
  • Input Method Flexibility: Users need options between voice, touch, and other interfaces based on their current situation
  • Easiest Path Forward: Focus on the most accessible interaction method for each specific use case

The Retrofit vs. Ground-Up Debate:

Existing Product Enhancement:

  • Major Player Strategy: Large companies are adding chatbots and agentic behavior to existing products
  • Distribution Advantage: Established products have built-in user bases and market presence
  • "Pixie Dust" Problem: Simply adding AI features may not create optimal solutions

AI-Native Development:

  • Startup Mentality: New technology revolutions typically require building from scratch
  • Clean Slate Benefits: Designing specifically for AI capabilities from the ground up
  • Vertical-Specific Outcomes: Success may vary significantly across different industries and use cases

Strategic Decision Framework:

  • Avoid Opinion-Based Decisions: Don't rely on assumptions about which approach is better
  • Identify Causal Mechanisms: Understand the underlying factors that determine success
  • Hypothesis Validation: Test your assumptions with real data and user feedback
  • Make-or-Break Impact: These strategic choices will determine product success or failure

Timestamp: [8:03-9:45]

👥 How Will AI-Native Teams Differ from Traditional Companies Using AI?

Team Structure and Organizational Evolution

Team Size Evolution:

  • Default Assumption: Most people expect team sizes to shrink with AI adoption
  • Parallel to Product Strategy: Similar to retrofit vs. ground-up product decisions, teams face build-from-scratch vs. adaptation choices
  • Competitive Dynamics: AI-native teams may have advantages over large companies downsizing with AI integration

AI-Native Team Advantages:

  1. Built-in AI Integration - Teams designed from inception to work with AI tools and processes
  2. Operational Patterns - Different working methods and collaboration styles optimized for AI assistance
  3. Cultural Foundation - Organizational culture that naturally incorporates AI capabilities

Adaptation Challenges:

  • Rapid Capability Changes: AI capabilities evolve every 6, 12, or 18 months
  • Constant Evolution: Today's AI-native company may be outdated in 12 months without continuous adaptation
  • Retrofit Requirements: Even AI-native teams need to continuously update their approaches

Dynamic Competitive Landscape:

  • Moving Target: What constitutes "AI-native" changes as technology advances
  • Continuous Learning: Teams must stay current with emerging AI capabilities
  • Strategic Flexibility: Success requires balancing native AI design with adaptive capacity

Timestamp: [9:50-10:38]

🔒 What Security and Trust Challenges Do AI Agents Face?

Security Models and Trust Architecture

On-Demand Code Security:

  • Database Layer Access: LLMs need ability to reach database level for customer-specific actions
  • Trust Prerequisites: Cannot implement without reliable controls and model trustworthiness
  • Risk Management - Balancing functionality with security requirements (a toy policy gate is sketched below)
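
As one concrete illustration of controls at the database layer, here is a minimal, assumed policy gate that lets an agent run only read-only queries against an allowlist of tables. The table names are placeholders, and the regex-based parsing is a stand-in for a real SQL parser with row-level permissions:

```python
import re
import sqlite3

READ_ONLY_TABLES = {"products", "orders"}  # hypothetical allowlist

def agent_query(db: sqlite3.Connection, sql: str) -> list:
    """Run an agent-proposed query only if it is a SELECT on allowed tables."""
    statement = sql.strip().rstrip(";")
    if not statement.lower().startswith("select"):
        raise PermissionError("agents may read, not write")
    tables = {t.lower() for t in re.findall(r"\bfrom\s+(\w+)", statement, re.IGNORECASE)}
    if not tables <= READ_ONLY_TABLES:
        raise PermissionError(f"table(s) outside allowlist: {tables - READ_ONLY_TABLES}")
    return db.execute(statement).fetchall()
```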

Agent Collaboration Challenges:

Personal vs. Professional Integration:

  1. User Expectations - Users want one unified agent for all activities
  2. Information Segregation - Personal information must stay separate from professional contexts
  3. Employer Privacy - Personal activities should remain private from employers

Walled Garden Problems:

  • Multiple Agent Reality: Current trend toward different agents for different settings
  • User Experience Conflict: Users prefer unified experience over fragmented agent ecosystem
  • Integration Complexity: Technical challenges in enabling secure agent collaboration

Corporate Agent Alignment:

Trust Beyond AI Models:

  • Perfect Alignment Assumption: Even with perfectly aligned AI models, trust issues remain
  • Corporate Implementation: Startups and corporations build agents with their own interests
  • User vs. Company Interests: Potential conflicts between user needs and corporate goals

Bias and Manipulation Risks:

  • Ad-Based Models: Search agents may be biased toward profitable recommendations
  • Hidden Agendas: Agents might optimize for company benefit rather than user benefit
  • Capability Amplification: Risks increase as models become more capable

Personal-Professional Agent Boundaries:

  • Dual Loyalty Problem: Agents serving both personal and professional needs
  • Optimization Conflicts: Company-controlled agents might prioritize employer interests in personal decisions
  • Escalating Concerns: More capable models create greater potential for manipulation

Timestamp: [10:38-12:52]

⚠️ Why Is Trust in AI Companies More Critical Than Trust in AI Models?

Human Guardrails and Corporate Accountability

Traditional Trust Mechanisms:

Diversity as a Safety Net:

  • People-Based Trust: Companies are trusted partly because they employ diverse groups of people
  • Cultural Safeguards: Reasonable company culture creates internal checks and balances
  • Whistleblower Protection: Employees can raise concerns, leak information, or quit in protest

Collective Accountability:

  1. Internal Resistance - Employees can oppose bad CEO decisions
  2. Public Consequences - Bad actors face reputational and operational consequences
  3. Human Dependencies - Companies need people to function, creating natural constraints

Semi-Automated World Risks:

Reduced Human Oversight:

  • Single Point of Failure: One person could make decisions affecting entire products
  • Limited Awareness: Fewer people aware of critical decisions being made
  • Easier Bad Actions: Significantly easier for bad actors to cause harm

Historical Context:

  • Silicon Valley Reality: History shows many people become misaligned when money is involved
  • Human Nature: Majority of people can be compromised under the right circumstances
  • Amplified Risk: Smaller teams with AI amplification increase potential for abuse

Enterprise Trust Patterns:

Current Distrust Factors:

  • Business Continuity: Enterprises worry startups will go out of business
  • Easier Misconduct: Small startups can more easily "do the wrong thing" compared to large companies
  • Operational Advantages: Startup agility sometimes comes from bypassing traditional safeguards

Expanding Concern:

  • Enterprise Awareness: Large companies already consider these trust issues
  • Consumer Impact: Everyday people will increasingly face these same concerns
  • Trust Deficit: Growing gap between AI capabilities and trustworthy implementation

Timestamp: [13:04-14:46]

🛡️ What New Guardrails Are Needed for AI-Powered Companies?

AI-Powered Auditing and Trust Infrastructure

The Guardrail Gap:

  • Lost Human Safeguards: Traditional company ethics relied on diverse human teams with strong culture
  • Cultural Dependency: Previous trust models assumed collections of people who care about doing the right thing
  • New Requirements: Need alternative mechanisms when human guardrails are reduced

AI-Powered Auditing Solutions:

Advantages Over Human Auditors:

  1. Reduced Bias - AI auditors can be less biased than human counterparts
  2. Memory Deletion - AI can delete all audit information after completion
  3. Information Security - No risk of auditors taking sensitive information with them

Implementation Framework:

  • Company Agreement: Companies voluntarily submit to AI auditing
  • Public Mission Verification: AI confirms company adherence to stated public commitments
  • Self-Deletion Protocol: AI auditor deletes itself and all notes if no malfeasance is found (sketched below)
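
A minimal sketch of that protocol, with a placeholder `flags_violation` standing in for the model call and an invented record format; the point is that working notes never outlive the audit:

```python
from dataclasses import dataclass, field

def flags_violation(record: str, commitments: list[str]) -> bool:
    """Placeholder: ask a model whether this record breaks a stated commitment."""
    raise NotImplementedError

@dataclass
class SelfDeletingAuditor:
    commitments: list[str]
    _notes: list[str] = field(default_factory=list)  # exists only during the audit

    def audit(self, records: list[str]) -> str:
        for record in records:
            if flags_violation(record, self.commitments):
                self._notes.append(record)
        verdict = "clean" if not self._notes else f"malfeasance found in {len(self._notes)} record(s)"
        self._notes.clear()  # self-deletion: only the verdict leaves the sandbox
        return verdict
```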

Current Auditing Landscape:

Existing Audit Types:

  • Legal Compliance: Required audits for legal and regulatory reasons
  • Financial Verification: Audits for financial accuracy and transparency
  • Certification Processes: Audits for specific standards (organic certification, etc.)

Expansion Opportunities:

  • Ethical Compliance: New categories of audits for AI ethics and user protection
  • Algorithmic Transparency: Auditing AI decision-making processes
  • Trust Verification: Systematic verification of company trustworthiness claims

Timestamp: [14:46-15:55]

💎 Summary from [8:03-15:55]

Essential Insights:

  1. Trust as the Central Theme - Trust in AI systems, agents, and the companies building them will determine success across all aspects of AI product development
  2. Strategic Decision Framework - Founders must move beyond opinions to identify causal mechanisms and validate hypotheses about retrofit vs. ground-up approaches
  3. Human Guardrail Crisis - Traditional trust mechanisms based on diverse human teams are disappearing, requiring new AI-powered auditing and verification systems

Actionable Insights:

  • Product Strategy: Test both retrofit and AI-native approaches based on your specific vertical rather than assuming one is universally better
  • Team Building: Design AI-native organizational patterns while maintaining flexibility to adapt as AI capabilities evolve every 6-18 months
  • Trust Infrastructure: Implement transparent auditing mechanisms and public accountability measures to build user trust in an era of smaller, AI-augmented teams
  • Security Planning: Develop robust information segregation systems for agents that need to operate across personal and professional boundaries
  • User Experience: Focus on contextual, multimodal interfaces that meet users where they are rather than forcing single interaction methods

Timestamp: [8:03-15:55]

📚 References from [8:03-15:55]

People Mentioned:

  • Jordan Fisher - Co-founder & CEO of Standard AI, now leads AI alignment research team at Anthropic, presenting insights on AI startup strategy

Companies & Products:

  • Standard AI - Jordan Fisher's company, mentioned in context of his background and experience
  • Anthropic - AI safety company where Jordan Fisher now leads alignment research

Concepts & Frameworks:

  • Generative UI - Emerging interface paradigm that hasn't fully materialized but shows promise for AI-native products
  • Multimodality - Integration of auditory, visual, video, and text inputs in AI interfaces
  • AI-Native Teams - Organizational structures built from inception to work with AI tools and processes
  • Agent Alignment - The challenge of ensuring AI agents act on behalf of users rather than corporate interests
  • AI-Powered Auditing - Proposed solution using AI systems to audit companies for ethical compliance and trustworthiness
  • Walled Gardens - Isolated AI systems that prevent seamless agent collaboration across different contexts
  • Semi-Automated Teams - Future organizational structures with minimal human oversight and maximum AI integration

Timestamp: [8:03-15:55]

🔍 How can AI-powered auditing systems build trust for startups?

Revolutionary Transparency Through Technology

The Trust Crisis Challenge:

  • Traditional auditing limitations - Human auditors can steal IP, discover unrelated sensitive information, and create security risks
  • Public commitments lack teeth - Companies routinely make statements about caring for users, open source, and doing the right thing without accountability
  • Need for binding verification - Moving beyond empty promises to actual enforceable standards

AI-Powered Auditing Solution:

  1. Comprehensive monitoring - Neutral AI systems inspect every company communication, including Slack messages and internal decisions
  2. Mission alignment verification - Automated checking that all company decisions actually align with stated mission statements
  3. Continuous oversight - Ongoing audit processes rather than periodic human reviews

Implementation Framework:

  • Neutral arbiters - Independent AI systems without conflicts of interest
  • Complete transparency - Willingness to open all company operations to scrutiny
  • Binding commitments - Making public statements that carry real consequences through automated enforcement

This technology isn't available today but represents the future of corporate accountability and user trust.

Timestamp: [16:02-17:23]

⚖️ What alignment problems must AI startups solve for economic viability?

Critical Alignment Challenges for Long-Horizon Agents

The Economic Pressure Point:

  • Control vs. economics - Alignment isn't just about keeping AI under human control, but making models economically viable
  • Time horizon expansion - As AI agents work for longer periods (days or weeks instead of minutes), trust requirements increase exponentially
  • Review limitations - Current models like Claude work for 5-minute intervals with human review, but longer autonomy demands higher reliability

Essential Alignment Requirements:

  1. Reliability assurance - Certainty that AI won't go completely off the rails during extended operation periods
  2. Economic viability - Making long-horizon agents trustworthy enough for practical business use
  3. Intervention timing - Determining optimal human oversight intervals without destroying efficiency (see the checkpoint sketch below)
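
One way to picture the intervention-timing question is a checkpointed agent loop: the agent works autonomously for a window, then pauses for human sign-off. `agent_step` and the window size below are assumptions, not a real framework API:

```python
def agent_step(state: dict) -> dict:
    """Placeholder for one unit of autonomous agent work."""
    raise NotImplementedError

def run_with_oversight(state: dict, total_steps: int, review_every: int = 50) -> dict:
    """Run the agent, pausing for human review every `review_every` steps."""
    for step in range(1, total_steps + 1):
        state = agent_step(state)
        if step % review_every == 0:
            print(f"[checkpoint] step {step}: {state.get('summary', '(no summary)')}")
            if input("continue? [y/N] ").strip().lower() != "y":
                break  # the human pulls the plug mid-run
    return state
```

The more reliable the model, the larger `review_every` can safely be, which is exactly the economic pressure described above.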

Market Opportunity:

  • Positive economic pressure - Market demand for reliable long-horizon agents creates natural incentives for alignment research
  • Competitive advantage - Companies solving these alignment challenges first will dominate the autonomous AI market
  • Open questions remain - Unclear exactly which aspects of alignment need solving and to what degree

The next 12 months will be critical for determining which alignment problems are essential for commercial AI deployment.

Timestamp: [17:28-18:29]

📊 Does proprietary data still provide competitive advantages in the LLM era?

The Evolution of Data-Driven AI Advantages

Historical Context:

  • Pre-LLM reality - Custom datasets were the only path to useful AI systems
  • Training requirements - Companies needed massive proprietary datasets to train effective models
  • Enterprise advantage - Organizations with unique data had insurmountable competitive moats

The LLM Disruption:

  • General model superiority - Frontier LLMs became more powerful than custom-trained models
  • Fine-tuning decline - Even fine-tuning on proprietary data often performed worse than general models
  • Internet knowledge dominance - LLMs excel at everything available online but struggle with specialized domains

Remaining Data Advantages:

  1. Tacit knowledge domains - Industries where critical knowledge hasn't leaked to the internet
  2. Material science applications - Specialized fields requiring decades of proprietary research data
  3. Manufacturing secrets - Companies like TSMC and ASML maintain multibillion-dollar knowledge advantages

Strategic Implications:

  • Defensible positions - Startups should target industries with protected proprietary knowledge
  • Semiconductor example - Frontier LLMs cannot build cutting-edge semiconductor fabs due to closely guarded trade secrets
  • Competitive moats - Companies with genuine proprietary data in specialized domains maintain significant advantages

Timestamp: [18:34-20:09]

⚡ How can AI startups overcome GPU capacity constraints for competitive advantage?

Technical Optimization Strategies for Resource-Constrained Growth

The Capacity Crisis:

  • Demand vs. supply - Consumer and startup demand points toward 100x scaling over the next couple of years
  • GPU production limits - Hardware manufacturing cannot keep pace with AI scaling requirements
  • Universal challenge - Every AI company faces the same fundamental resource constraints

Technical Solutions:

  1. Fine-tuning revival - Reconsidering fine-tuning strategies abandoned by many companies
  2. Context management optimization - Improving how models handle and process contextual information
  3. Smart model routing - Strategic switching between small and large models based on task requirements (a toy router is sketched below)
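
A deliberately simple sketch of the routing idea: send short, easy-looking requests to a cheap model and everything else to a frontier model. The model names and the heuristic are placeholders; production routers would more likely use a trained classifier or confidence scores:

```python
SMALL_MODEL = "small-model"     # hypothetical identifiers
LARGE_MODEL = "frontier-model"

HARD_MARKERS = ("prove", "debug", "refactor", "plan", "multi-step")

def route(prompt: str) -> str:
    """Pick a model: cheap for short, simple prompts; frontier otherwise."""
    looks_hard = any(marker in prompt.lower() for marker in HARD_MARKERS)
    return SMALL_MODEL if len(prompt) < 500 and not looks_hard else LARGE_MODEL

assert route("Summarize this paragraph.") == SMALL_MODEL
```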

Competitive Advantage Window:

  • Technical differentiation - Companies mastering these optimization techniques gain 1-2 year advantages
  • Product development philosophy - "Make it great, then make it scale" - but scaling work becomes critical early
  • Technical moat building - Optimization expertise creates defensible competitive positions

Temporary Nature:

  • Rat race reality - Advantages from capacity optimization are ultimately temporary
  • Model improvements - Better models and increased capacity will eventually eliminate these advantages
  • Durable strategy needed - Companies must develop longer-term competitive moats beyond technical optimization

Timestamp: [20:15-21:21]

๐Ÿฐ What creates durable competitive advantages in a post-AGI world?

Building Moats When AI Can Replicate Any Startup

The Post-AGI Challenge:

  • Replication threat - In 2-3 years, Claude 7 or GPT-7 might replicate any startup through simple prompting
  • Mega corp advantage - Large corporations with more resources can throw more tokens at problems
  • Existential question - Will startups have any durable advantages or get completely dominated?

Hard Problems Strategy:

  1. Physical world constraints - Problems requiring real-world implementation and manufacturing
  2. Infrastructure challenges - Energy, manufacturing, and chip production remain difficult
  3. Robotics lag - Physical automation trails behind software capabilities, creating opportunity windows

Durable Advantage Categories:

  • TSMC and ASML model - Companies solving genuinely hard problems that require specialized expertise
  • Manufacturing complexity - Physical production challenges that can't be solved through software alone
  • Energy infrastructure - Power generation and distribution remain complex engineering challenges

Strategic Framework:

  • Courage requirement - Willingness to tackle genuinely difficult problems rather than easy software solutions
  • Massive competitive advantage - Hard problems create sustainable moats even against well-funded competitors
  • Future-proofing - Selecting problems that will remain challenging even with advanced AI assistance

The key question: What problems will still be hard and worth solving in an AGI world?

Timestamp: [21:28-22:39]

📈 Is there an intelligence ceiling that accelerates AI commoditization?

Understanding Task Saturation in AI Development

The Ceiling Hypothesis:

  • Task-specific limits - Different applications may have maximum useful intelligence levels
  • Saturation points - Some tasks might reach "good enough" status where additional intelligence adds no value
  • Commoditization acceleration - Once tasks hit their ceiling, competitive advantages disappear rapidly

Recent Evidence:

  1. Video generation breakthrough - Veo3 and advanced video generation finally creating convincing content
  2. Social media transformation - Instagram feeds filling with AI-generated content that users actually enjoy
  3. Rapid improvement plateau - Dramatic quality jumps followed by potential saturation points

Task-Specific Analysis:

  • Creative tasks - Writing poems, generating images, creating videos may have natural quality limits
  • Technical tasks - Code generation, git diffs, and programming assistance might reach optimal performance levels
  • Vertical dependencies - Different industries and use cases will have varying intelligence ceilings

Strategic Implications:

  • Commoditization timing - Tasks with lower ceilings will commoditize faster than those requiring unlimited intelligence
  • Competitive positioning - Companies can't stay ahead by simply upgrading to newer models once saturation hits
  • Market dynamics - Understanding which tasks have ceilings helps predict competitive landscape evolution

The critical question: Which tasks will saturate first, and how can startups position themselves accordingly?

Timestamp: [22:45-23:44]

💎 Summary from [16:02-23:57]

Essential Insights:

  1. AI-powered auditing revolution - Future trust-building through comprehensive, neutral AI systems that monitor all company operations and verify mission alignment
  2. Alignment economics - The next 12 months are critical for solving alignment problems that make long-horizon AI agents economically viable for business use
  3. Data advantage evolution - While general LLMs dominate most domains, companies with proprietary tacit knowledge in specialized fields like manufacturing still maintain competitive advantages

Actionable Insights:

  • Target hard problems - Focus on infrastructure, energy, manufacturing, and physical world challenges that will remain difficult even in a post-AGI world
  • Optimize for capacity constraints - Master fine-tuning, context management, and model routing to gain 1-2 year competitive advantages during GPU scarcity
  • Understand task saturation - Identify which applications have intelligence ceilings to predict commoditization timing and position strategically
  • Build durable moats - Develop competitive advantages that can't be replicated by simply prompting advanced AI models or throwing more resources at problems

Timestamp: [16:02-23:57]

📚 References from [16:02-23:57]

Companies & Products:

  • TSMC - Taiwan Semiconductor Manufacturing Company, cited as example of company maintaining proprietary manufacturing knowledge that LLMs cannot replicate
  • ASML - Dutch semiconductor equipment manufacturer, referenced for keeping multibillion-dollar tacit knowledge in-house
  • Anthropic - AI safety company, mentioned in context of Claude AI assistant capabilities
  • Claude 7 - Hypothetical future version of Anthropic's AI assistant used in post-AGI scenarios
  • GPT-7 - Hypothetical future OpenAI model referenced in competitive advantage discussions

Technologies & Tools:

  • Claude - Anthropic's AI assistant, referenced as an example of current models that work in roughly 5-minute intervals with human review
  • Veo3 - Advanced video generation AI model mentioned as breakthrough in creating convincing video content
  • Slack - Communication platform referenced in context of comprehensive AI auditing systems
  • Instagram - Social media platform mentioned as example of AI-generated content proliferation

Concepts & Frameworks:

  • Long-horizon agents - AI systems designed to work autonomously for extended periods (days or weeks) without human intervention
  • Tacit knowledge - Specialized expertise and know-how that companies keep proprietary and hasn't leaked to public internet
  • Post-AGI world - Future scenario where artificial general intelligence can replicate most startup capabilities
  • Intelligence ceiling - Theoretical maximum useful intelligence level for specific tasks beyond which additional capability adds no value

Timestamp: [16:02-23:57]

🏛️ What happens when AI companies control what AI can and cannot do?

AI Neutrality and Corporate Control

Jordan Fisher raises critical concerns about the concentration of power in AI development and deployment:

The Core Problem:

  1. Corporate Gatekeeping - A handful of corporations will decide what AI systems can and cannot do for users
  2. Refusal Mechanisms - When AI models refuse requests, these companies become arbiters of acceptable AI behavior
  3. Infrastructure Dependency - As society relies more on AI, these decisions shape what gets built and developed

The Neutrality Question:

  • Historical Precedent: Electrical infrastructure operates as a neutral utility - no company can dictate which appliances work on the grid
  • Web Comparison: "We fought and lost this battle for the web" - referring to platform control and gatekeeping
  • AI Infrastructure: Need for "AI neutrality" or "token neutrality" to prevent monopolistic control

Critical Implications:

  • Societal Impact: Companies controlling AI access effectively control technological development
  • Innovation Barriers: Selective AI access could stifle competition and innovation
  • Democratic Concerns: Private corporations making decisions that affect entire societies

Timestamp: [24:03-24:55]

🌎 Why do people ask "how do we make money" when facing humanity's biggest transformation?

The Money Question vs. World-Changing Opportunity

Fisher expresses disappointment with the immediate focus on profit when discussing AGI's transformative potential:

The Pattern He Observes:

  1. Universal Recognition - Regular people understand AGI's significance when explained
  2. Visceral Awareness - They grasp that we're facing "humanity defining, society defining" changes
  3. The Inevitable Question - After understanding the magnitude, they ask: "How do we make money off this?"

Why This Happens:

  • Economic Fear: Concern about losing jobs or becoming uncompetitive
  • Startup Anxiety: Fear that the opportunity to build companies may close forever
  • Survival Instinct: "I better make my mark right now, I better make my money right now"
  • Understandable Response: Fisher acknowledges these fears are totally rational

The Missed Opportunity:

  • Last Chance Impact: This might be the last product or company you build
  • Historical Moment: Final opportunity to make a world-changing difference
  • Unique Positioning: Founders have the perspective and skills to drive positive change during rapid transformation

Fisher's Balanced Approach:

  • Practical Support: Happy to help brainstorm money-making strategies
  • Higher Calling: Emphasizes using this moment for meaningful impact
  • Urgency: If you care about something, "now is the time to do it"

Timestamp: [25:32-27:58]

🎯 What does "build something people want" really mean in the AGI era?

Redefining YC's Famous Slogan

Fisher challenges founders to think deeper about Y Combinator's core principle:

Beyond Surface-Level Wants:

  1. Trust-Centered Products - People want agents and bots they can trust
  2. Long-term Thinking - Products good for mental health over 20 years, not just 20 seconds of delight
  3. Societal Benefit - What people want often aligns with what's good for society

The Deeper Question:

  • Consumer vs. Societal Needs: Don't just think about what people will consume
  • Community Impact: Consider effects on children, neighbors, and broader society
  • Genuine Value: "What does society need?" - if you build the right thing, people will want it

Founder Advantage:

  • Unique Perspective: Founders think differently than most people
  • Edge-Finding Ability: The core job of being a founder is finding advantages
  • Rapid Adaptation: Rules change every six months - founders must think continuously
  • Bleeding Edge Position: Best positioned to understand changes and drive positive impact

The Call to Action:

  • Make Money While You Can: Practical acknowledgment of economic realities
  • Drive Positive Change: Use insights from rapid changes to benefit society
  • Think Beyond Consumption: Focus on what the world genuinely needs

Timestamp: [27:58-29:53]

🐦 How does Jordan Fisher stay informed about AI developments?

Information Diet and Mental Model Building

When asked about sources for building his mental model around AGI:

Primary Information Source:

  • Twitter/X as Main Platform: Despite hesitation to admit it, Fisher relies heavily on Twitter
  • Religious Curation: Extremely disciplined about following and unfollowing accounts
  • Quality Control: Follows people with good takes, unfollows those with poor insights

Curation Philosophy:

  1. Energy Budget Management - Limited capacity to digest new ideas requires selectivity
  2. Diversity Over Agreement - Don't just follow people you agree with
  3. Exploration vs. Exploitation - Borrowing from reinforcement learning concepts (made concrete in the sketch below)

Strategic Approach:

  • Master Your Information Diet: Be intentional about information consumption
  • Maximize for Diversity: Seek varied perspectives and viewpoints
  • Exploration Phase: Gather diverse information before "exploitation" (starting a company)
  • Quality Over Quantity: Focus on sources that provide genuine insights
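
The reinforcement-learning analogy can be made literal with an epsilon-greedy rule: mostly read the sources that have paid off, but reserve a fixed slice of attention for unfamiliar ones. The scores and the 20% exploration rate here are invented for illustration:

```python
import random

def pick_source(scores: dict[str, float], epsilon: float = 0.2) -> str:
    """Epsilon-greedy feed curation: explore new voices, exploit proven ones."""
    if random.random() < epsilon:
        return random.choice(list(scores))       # explore: an unproven source
    return max(scores, key=lambda s: scores[s])  # exploit: the best-known source
```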

Timestamp: [30:32-31:19]

🛡️ Should startup ideas be chosen based on AGI defensibility?

Startup Strategy in an AGI World

An audience member asks about prioritizing AGI-resistant ideas over passion or market opportunity:

The Strategic Question:

  • Traditional Factors: Passion, experience, expertise in the domain
  • Market Considerations: Underserved markets, less competitive spaces
  • AGI Factor: Ideas most defensible against AGI disruption

Fisher's Perspective:

  • Timeless Principles: Many of these were good questions even before AGI concerns
  • Reality of Execution: After 6 months of 100-hour work weeks, passion alone isn't enough to sustain you
  • Practical Considerations: The grind of startup life tests all motivations

Implied Framework:

  • Multi-Factor Decision: No single criterion should dominate startup choice
  • Execution Reality: Consider your ability to persist through difficult periods
  • Balance Required: AGI defensibility is important but not the only consideration

Timestamp: [31:19-31:57]

💎 Summary from [24:03-31:57]

Essential Insights:

  1. AI Neutrality Crisis - A handful of corporations will control what AI can do, potentially requiring infrastructure-level neutrality like electrical grids
  2. Money vs. Mission Tension - While economic fears about AGI are understandable, this may be the last opportunity to build world-changing products
  3. Redefining Success Metrics - "Build something people want" should focus on long-term societal benefit, not just immediate consumption

Actionable Insights:

  • Information Diet Strategy - Curate diverse, high-quality sources (Fisher uses Twitter religiously) while maximizing exploration over agreement
  • Startup Decision Framework - Consider AGI defensibility alongside passion and market factors, but remember execution reality matters most
  • Impact Timing - If you care about making a difference, act now while individual founders can still drive meaningful change

Timestamp: [24:03-31:57]

📚 References from [24:03-31:57]

Companies & Products:

  • General Electric (GE) - Used as an example of a potential infrastructure monopoly in the discussion of electrical grids
  • Y Combinator - Referenced for their famous "build something people want" slogan and startup philosophy

Technologies & Tools:

  • Twitter/X - Fisher's primary source for staying informed about AI developments and industry insights
  • Reinforcement Learning - Mentioned in context of exploration vs. exploitation strategies for information consumption

Concepts & Frameworks:

  • AI Neutrality/Token Neutrality - Proposed concept for preventing corporate control over AI infrastructure access
  • Information Diet Curation - Strategic approach to managing information consumption for optimal learning
  • Exploration vs. Exploitation - Reinforcement learning concept applied to information gathering strategies

Timestamp: [24:03-31:57]

🎯 What drives startup founders when building becomes unbearable?

Founder Motivation & Commitment

Building a startup involves inevitable periods of extreme difficulty that will test every founder's resolve. The key differentiators for persistence aren't always what you might expect.

Core Drivers That Matter:

  1. Impact Orientation - Your desire to create meaningful change in the world
  2. Team Commitment - Deep loyalty to co-founders and team members
  3. Company Mission - Unwavering belief in what you're building

What Matters Less Than Expected:

  • Domain Passion - While helpful, extreme passion for your specific industry isn't necessarily required
  • Personal Interest - You don't need to love every aspect of the problem you're solving

The Reality Check:

When everything goes wrong and you're facing the hardest moments of entrepreneurship, only your commitment to impact and your team will sustain you through the challenges. The work itself will often be frustrating and difficult regardless of your initial enthusiasm for the domain.

Timestamp: [32:03-32:21]

🛡️ How should AI startups think about defensibility in the AGI era?

Strategic Defensibility Planning

The question of defensibility becomes critical when considering whether your startup will survive the rapid advancement toward AGI. Your time horizon and goals fundamentally shape this decision.

Short-Term vs. Long-Term Strategy:

  1. 6-18 Month Horizon - Quick monetization strategies can work without strong defensibility
     • Focus on rapid ARR growth
     • Optimize for quick exits or flips
     • Capitalize on current market opportunities
  2. Long-Term Vision - Building through the "singularity transition" requires deeper thinking
     • Defensibility becomes paramount
     • Must withstand technological disruption
     • Needs sustainable competitive advantages

Key Considerations:

  • Market Timing - Will your solution become a "rounding error" in six months?
  • Technology Moats - What prevents AI from commoditizing your offering?
  • Strategic Value - How does your company remain relevant post-AGI?

The Bottom Line:

If you're optimizing for quick returns, defensibility may be less critical. But if you want to build something that stands the test of time through the AI transition, defensibility is probably the most important factor to consider.

Timestamp: [32:21-33:01]

💰 Will money become more or less valuable as AI reduces costs?

Economic Value in the AGI Era

The relationship between money, goods, and services will fundamentally shift as AI drives costs toward zero, but the outcome depends heavily on policy decisions we make today.

Policy-Dependent Scenarios:

  1. Universal Basic Income (UBI) - Government-distributed income to all citizens
  2. Universal Basic Compute - Distributed access to computational resources
  3. Current System - Maintaining capital and labor dynamics

The Power Concentration Risk:

  • Today's Dynamic - Capital and labor create checks and balances
  • Post-AGI Reality - Labor becomes unnecessary, eliminating worker leverage
  • Dangerous Outcome - "Capital begets capital" without labor constraints

Critical Concerns:

  • Government Power - UBI gives extreme control over society
  • Wealth Concentration - Without labor, capital owners face no resistance
  • Democratic Balance - Traditional checks on power may disappear

The Dilemma:

We're caught between two problematic scenarios: either unprecedented government control through UBI or unchecked wealth concentration through pure capital dominance. The policy decisions we make now will determine which path we take.

Timestamp: [33:12-34:47]

🎭 How should AI systems handle user preferences versus user values?

Alignment at the Individual Level

The challenge of aligning AI with individual users becomes complex when you consider the difference between what users want in the moment versus what they truly value long-term.

The Sycophantic Response Problem:

  • Immediate Preference - Users often choose flattering, agreeable responses
  • Example Scenario - When presented with two responses, users typically pick the one that praises their question
  • Short-term Satisfaction - "Of course your question is great!" feels good in the moment

The Values-Based Approach:

When you ask users to choose between principles:

  1. Honest Feedback Principle - "We'll only praise ideas that are genuinely good"
  2. Constant Praise Principle - "We'll always tell you what you want to hear"

Result: Almost everyone chooses the honest feedback principle when framed this way.

The Key Insight:

  • Level of Engagement Matters - Different ways of asking yield different answers
  • Potential for Manipulation - Companies can exploit this by only asking questions in certain ways
  • True User Benefit - You must ask yourself what's genuinely best for the user

Startup Product Lens:

While users often don't know exactly what they want, they do have underlying values. The goal is to discover and honor those values rather than just satisfying immediate preferences.

Timestamp: [34:54-36:42]

🤔 What does the tech industry get wrong about innovation?

Groupthink in a "Forward-Looking" Industry

Despite the tech industry's self-image as bold and forward-thinking, there's actually an extreme amount of groupthink that limits true innovation and strategic thinking.

The Innovation Paradox:

  • Industry Self-Perception - Bold, risk-taking, visionary
  • Actual Reality - Extreme groupthink in products, funding, and strategy

Evidence of Groupthink:

  1. Product Development - Similar solutions across companies
  2. VC Funding Patterns - Investors follow trends rather than leading them
  3. Strategic Thinking - Reactive rather than proactive planning

The VC Problem:

  • Current Mindset - "I'm ahead of the curve investing in AI"
  • Reality Check - "You're already two years behind"
  • Missing Questions - VCs rarely ask: "What do I need to invest in today so it's resilient in two years?"

The Forward-Thinking Gap:

Most investors and founders aren't asking the right questions about building resilience for future technological shifts. True forward-thinking requires anticipating what will be needed years ahead, not just following current trends.

Investment Philosophy:

As a founder, the key question should be: "What needs to be built today to remain relevant and valuable in two years?" This type of strategic thinking is rare but essential.

Timestamp: [36:48-37:51]

⛓️ Could blockchain technology help solve future trust problems?

Blockchain's Role in AI-Era Trust

While maintaining skepticism about blockchain technology, there are specific scenarios in the AI era where blockchain concepts might actually provide valuable solutions to trust challenges.

Personal Blockchain Stance:

  • General Skepticism - "I'm a huge blockchain doubter"
  • Market Reality - Prices keep rising despite personal doubts
  • Investment Position - Refuses to buy due to skepticism

Potential AI-Era Applications:

  1. AI-Powered Audits - Different AI companies could audit each other using blockchain verification
  2. Universal Basic Systems - UBI or basic compute distribution might need blockchain mediation
  3. Government Independence - Blockchain could prevent central government control over basic resource distribution

Trust-Building Scenarios:

  • Cross-AI Verification - Blockchain could enable trustless auditing between AI systems
  • Resource Distribution - Decentralized systems for distributing universal benefits
  • Transparency Mechanisms - Immutable records for AI decision-making processes (a minimal hash-chain sketch follows below)
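
The "immutable records" idea can be illustrated without a full blockchain: a minimal append-only hash chain, where each audit verdict commits to the entry before it, already makes tampering with history detectable. Everything here is illustrative:

```python
import hashlib
import json
import time

def append_attestation(chain: list[dict], auditor: str, verdict: str) -> list[dict]:
    """Append an audit verdict that commits to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"auditor": auditor, "verdict": verdict,
            "time": time.time(), "prev": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)
    return chain

log: list[dict] = []
append_attestation(log, "auditor-a", "clean")
append_attestation(log, "auditor-b", "clean")  # rewriting entry 1 now breaks entry 2's `prev`
```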

The Pragmatic View:

While remaining generally skeptical of blockchain hype, the specific trust challenges that emerge in an AI-dominated world might create legitimate use cases for blockchain technology that don't exist today.

Timestamp: [37:58-38:50]

🤖 Why is agent-to-agent communication harder than it appears?

The Complexity of AI Agent Interactions

Building systems where AI agents communicate effectively involves subtle challenges that aren't immediately obvious, even for seemingly simple tasks like scheduling meetings.

The Meeting Scheduling Example:

Appears Simple: Look at the calendar, suggest available times

Actually Complex: Game theory and power dynamics are crucial

Hidden Complexity Factors:

  1. Availability Signaling - Being too liberal with free slots communicates the person isn't busy or important
  2. Power Dynamics - Suggesting meetings "two weeks out" vs. immediate availability sends different messages
  3. Strategic Communication - Good human assistants understand these implicit rules

The Game Theory Component:

  • Information Asymmetry - What you reveal about availability matters
  • Status Signaling - Scheduling patterns communicate importance and demand
  • Relationship Management - Different approaches for different relationship types

Why This Is Hard for AI:

  • Implicit Knowledge - Human assistants know these rules intuitively
  • Contextual Awareness - No concrete information source contains these social dynamics
  • Semantic Subtlety - The important information is cultural and contextual, not technical

The Broader Implication:

Many agent-to-agent interactions that seem straightforward actually involve complex social and strategic elements that are difficult to encode in AI systems.
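
To make the signaling point concrete, here is a toy scheduler whose policy table encodes those implicit rules; the relationship categories and numbers are invented for illustration:

```python
from datetime import date, timedelta

# (minimum days out, maximum slots offered) per relationship type -- invented values
POLICY = {
    "close_collaborator": (0, 5),
    "peer":               (2, 3),
    "cold_outreach":      (10, 2),
}

def offer_slots(free_days: list[date], relationship: str) -> list[date]:
    """Offer only a few slots, at a distance that matches the relationship."""
    min_days_out, max_slots = POLICY[relationship]
    horizon = date.today() + timedelta(days=min_days_out)
    candidates = sorted(d for d in free_days if d >= horizon)
    return candidates[:max_slots]  # scarcity is the signal, not the raw calendar
```

None of these numbers live in any calendar API; they are social context an agent has to be told or infer.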

Timestamp: [39:16-40:15]

💎 Summary from [32:03-40:27]

Essential Insights:

  1. Founder Resilience - Impact orientation and team commitment matter more than domain passion for surviving startup challenges
  2. Strategic Defensibility - Long-term success requires thinking beyond quick wins to build something that survives the AI transition
  3. Economic Transformation - Money's future value depends on policy decisions around UBI and wealth concentration in a post-labor world

Actionable Insights:

  • Focus on defensibility if building for the long term rather than quick exits
  • Consider the difference between user preferences and user values when designing AI systems
  • Ask forward-looking questions about what will be resilient in two years, not what's trending now
  • Recognize that agent-to-agent communication involves complex social dynamics beyond technical protocols

Timestamp: [32:03-40:27]

📚 References from [32:03-40:27]

People Mentioned:

  • Jordan Fisher - Co-founder & CEO of Standard AI, now leads AI alignment research team at Anthropic

Companies & Products:

  • Standard AI - Computer vision company Jordan Fisher co-founded and led as CEO
  • Anthropic - AI safety company where Jordan now leads alignment research
  • Google - Recently released a protocol to standardize agent-to-agent communication

Technologies & Tools:

  • Universal Basic Income (UBI) - Government-distributed income system discussed as potential policy response to AI displacement
  • Universal Basic Compute - Proposed system for distributing computational resources as basic entitlement
  • Blockchain Technology - Discussed as potential solution for trust and verification in AI systems

Concepts & Frameworks:

  • AI Alignment - The challenge of ensuring AI systems behave in accordance with human values and intentions
  • Sycophantic AI Behavior - AI tendency to give users flattering responses rather than honest feedback
  • Game Theory in AI - Strategic considerations in agent-to-agent interactions, particularly in scheduling and communication
  • AGI Defensibility - Strategic planning for startup survival through the transition to Artificial General Intelligence

Timestamp: [32:03-40:27]