
Bucky Moore: The Next Decade of AI Infrastructure

This week, Lightspeed partner Mike Mignano sits down with Bucky Moore, a fellow partner at Lightspeed, to explore the rapidly shifting landscape of AI and infrastructure. They unpack the evolution from hardware to cloud to AI-native architectures, the growing momentum behind open-source models, and the rise of AI agents and reinforcement learning environments. Bucky also shares how his early days at Cisco shaped his bottom-up view of enterprise software, and why embracing the new is...

• June 26, 2025 • 48:17

Table of Contents

0:02-8:11
8:17-15:29
15:52-24:34
24:41-28:57
29:02-32:57
33:03-39:46
39:52-42:46
42:53-48:11

🎙️ Introduction to the Conversation

Mike Mignano introduces this week's episode of Generative Now, highlighting his excitement about sitting down with Bucky Moore, his new partner at Lightspeed. Mike positions Bucky as one of the sharpest thinkers in enterprise software and AI, spanning foundational AI to infrastructure to cybersecurity.

The conversation promises to explore what's currently on Bucky's radar, how his investing philosophy has evolved, and his predictions for the next decade of AI innovation. Mike emphasizes Bucky's particular gift for spotting spiky founders early and helping them scale big ideas into world-changing companies.

Timestamp: [0:02-0:41]

🤝 Partnership Dynamics at Lightspeed

The conversation opens with warm camaraderie between the two partners, with Bucky having joined Lightspeed about a month and a half prior to this recording (starting the last week of April). Mike shares his perspective on knowing Bucky's reputation before they worked together directly.

Mike reveals his admiration for Bucky's expertise in infrastructure investing, acknowledging that while he tends to focus more on consumer and some enterprise investments, he has always viewed Bucky as "the infra guy" - one of the best infrastructure VCs in the industry. This sets up an interesting dynamic where Mike openly admits infrastructure is "a world that is so far from me," creating an opportunity for learning and knowledge sharing.

Timestamp: [0:47-1:36]

💼 From Private Equity to Cisco: The Foundation

Bucky traces his career trajectory, starting with his undergraduate transition into private equity, which he "absolutely hated" for fundamental philosophical reasons. He explains that private equity is primarily a game of value extraction, while he was drawn to value creation - particularly in technology where he observed all the real value being generated.

This dissatisfaction prompted his move to the Bay Area to become part of the technology industry. His entry into Cisco's corporate development team came through networking - connecting with someone working on an acquisition integration who introduced him to the corp dev team. He emphasizes entering this role "very naive," not knowing basic infrastructure distinctions like the difference between a router and a switch.

The role at Cisco involved looking for startups that were compelling and relevant to Cisco's future ambitions, with the mandate to either invest in them, acquire them, or partner with them. This became Bucky's first exposure to founders and shaped his bottom-up approach to analyzing opportunities.

Timestamp: [1:42-4:37]

๐Ÿ—๏ธ Infrastructure's Massive Surface Area

Bucky describes how his vantage point at Cisco trained him to examine opportunities from the bottom up rather than top down. He became enamored with the incredible impact that infrastructure companies can have across every industry and companies of different sizes, noting their broad surface area of influence.

He reinforces this perspective by highlighting how some of the most valuable public companies today are effectively infrastructure companies. Among the NASDAQ's most richly valued names by trading multiples, he points to Datadog, Snowflake, and CrowdStrike as prime examples of infrastructure companies that have achieved massive scale.

Bucky acknowledges that infrastructure companies take longer to build and involve significant technical risk that must be mitigated early, making them capital intensive and uncertain. However, once they figure out a unique market seam where their technology can apply, these companies can compound over many decades in ways that generic software companies have struggled to achieve, with exceptions like ServiceNow and Salesforce.

He emphasizes that infrastructure is fundamental to every business and industry on earth, especially as more businesses move online and digitize, making the scale of these opportunities massive.

Timestamp: [2:06-3:21]

🔄 Cisco's Old World Meets New World Transformation

Bucky provides insight into Cisco's evolution beyond its traditional router and switch business, explaining how those industries have commoditized rapidly. Cisco has spent the past couple of decades trying to move increasingly into software businesses, evidenced by acquisitions like Splunk and Meraki.

He characterizes modern Cisco as having a very large, slow-growing legacy hardware and systems business alongside a moderately to high-growing suite of software businesses that represent the company's future. This created a fascinating dynamic where Bucky witnessed "old world new world kind of colliding at the same time."

Bucky recalls some "crazy conversations" from that era, including debates about whether SaaS would become significant or if virtual desktops would dominate, and whether cloud infrastructure was only suitable for test and development environments rather than production workloads.

These experiences taught him two critical lessons: first, that "luddism is inescapable" for large companies, and second, that "it just pays to run towards the new" whether you're an investor or operating a business. He emphasizes that the future typically happens faster than humans can comprehend, and positioning yourself on the right side of history means embracing emerging technologies.

Timestamp: [4:49-6:08]

🌊 The Nicira Networks Deal: A Formative Experience

Bucky shares details about one of his first major projects at Cisco - codenamed "Northshore" - which involved Nicira Networks, a company founded by Martin Casado (now a partner at Andreessen Horowitz). Nicira was pioneering the transition of networking IP entirely into software, moving from custom ASIC switches to standard x86 Intel servers.

The deal became a formative experience because it highlighted the challenges of corporate luddism. Martin Casado had no interest in selling to Cisco because he was aware that Cisco's typical approach would be to "buy the company and essentially put it in a drawer and let it die." Despite Cisco's interest, Casado chose to sell to VMware instead.

The outcome validated Casado's decision - Nicira became VMware NSX, a product that fundamentally changed the networking industry and sustained VMware as a business. Bucky reflects that had Cisco "run towards the new as fast as we could have instead of getting cute," the networking industry would have developed very differently over the past couple of decades.

This experience reinforced his belief about the inevitability of luddism in large companies and the strategic importance of embracing new technologies rather than protecting legacy business models.

Timestamp: [6:08-7:22]

⚙️ The Innovator's Dilemma in Action

Bucky and Mike discuss how Cisco's experience exemplifies the classic innovator's dilemma, where incumbents cling to legacy businesses despite technological disruption. Bucky explains how these incentives run deep within organizations - sales representatives make money selling existing products, and business leaders build their empires around those products commanding the most budget and attention.

This dynamic exists across all large companies, though it may not always manifest as cleanly as the hardware-to-software divide seen at Cisco. The structural incentives create natural resistance to change, even when the technological writing is on the wall.

Bucky emphasizes that this reality makes everyone in the startup ecosystem fortunate to work with companies that can move faster and aren't encumbered by these legacy incentives the way large companies are. Startups have the luxury of building for the future without having to protect existing revenue streams or organizational structures.

He concludes by noting that infrastructure tends to move in lockstep with technology cycles, setting up the foundation for understanding how broader technological shifts drive infrastructure innovation and investment opportunities.

Timestamp: [7:22-8:11]

💎 Key Insights

  • Infrastructure companies have massive surface area across every industry and company size, creating enormous scale opportunities
  • Some of today's most highly valued public software companies (Datadog, Snowflake, CrowdStrike) are infrastructure companies that can compound for decades
  • Legacy companies suffer from inevitable "luddism" - structural resistance to new technologies due to existing incentives and business models
  • The future happens faster than humans can comprehend, making it critical to "run towards the new" rather than protect legacy positions
  • Startups have a significant advantage over incumbents because they aren't encumbered by legacy revenue streams and organizational structures
  • Infrastructure innovation moves in lockstep with broader technology cycles
  • Value creation opportunities in technology far exceed value extraction models like private equity

Timestamp: [0:02-8:11]

📚 References

People:

  • Martin Casado - Founder of Nicira Networks, now a partner at Andreessen Horowitz and a friend and peer of Bucky's
  • Mike Mignano - Partner at Lightspeed, host of Generative Now podcast
  • Bucky Moore - Partner at Lightspeed, formerly at Kleiner Perkins

Companies/Products:

  • Nicira Networks - First company to move networking IP entirely into software, founded by Martin Casado
  • VMware NSX - Product that resulted from VMware's acquisition of Nicira; changed the networking industry
  • Datadog, Snowflake, CrowdStrike - Examples of valuable public infrastructure companies
  • ServiceNow and Salesforce - Examples of exceptional generic software companies that have compounded successfully
  • Splunk and Meraki - Examples of Cisco's software acquisitions

Concepts:

  • The Innovator's Dilemma - Referenced framework for understanding how incumbents struggle with disruptive technologies
  • Luddism - Term used to describe resistance to new technology within large organizations
  • SaaS vs Virtual Desktops - Historical debate about software delivery models
  • Custom ASIC vs x86 Intel servers - Technical transition in networking infrastructure

Timestamp: [0:02-8:11]

🔄 Infrastructure's Technology Cycle Evolution

Bucky explains how infrastructure investment follows predictable technology cycles, with each major computing paradigm fundamentally changing how infrastructure gets built. He traces the progression from personal computing (requiring networked computers), to the internet (enabling globally distributed connections beyond local area networks), to mobile and cloud, and most recently AI.

The pattern is consistent: a new technical innovation enables new patterns, which unlock new workloads, which demand new infrastructure. This creates a cyclical reinvention process approximately every decade, forcing both investors and entrepreneurs to constantly evolve their mindsets and technical understanding.

Bucky recalls meeting founders who had built successful enterprise data center companies but then had to completely reinvent their mindsets and technical assumptions to build for a cloud-native world. The same transformation is happening now with cloud and distributed systems experts who are building AI-era infrastructure companies but must evolve their approaches once again.

Timestamp: [8:17-10:06]

🚀 From Battery Ventures to Kleiner: The Cloud Transition Era

Mike and Bucky clarify Bucky's career timeline - he didn't move directly from Cisco to Kleiner Perkins, but started in venture at Battery Ventures before joining Kleiner in January 2018. This period represented the transition from traditional data center infrastructure (switches and routers) to cloud-native architectures.

During Bucky's time at Cisco (2011-2014), the main innovation focus was helping enterprises modernize their data centers with more modern storage, networking, and compute primitives. This included Cisco on networking, companies like NetApp and Nimble Storage (a Lightspeed company) reinventing storage, and VMware software running on commodity Intel x86 hardware to tie everything together.

However, this world was changing rapidly as Amazon Web Services emerged not just as a place to experiment with new applications, but as a platform where enterprises should move all their applications. AWS provided unprecedented agility and flexibility through on-demand compute and infrastructure resources - primitives that enabled entirely new categories of infrastructure companies to emerge.

Timestamp: [10:12-11:58]

โ„๏ธ The Snowflake Revolution: 10x Better, 10x Faster

Bucky uses Snowflake as a prime example of how cloud architectures enabled entirely new approaches to previously entrenched markets. Snowflake targeted the data warehousing business that had been dominated by Oracle in partnership with Teradata for decades - a market so deeply entrenched that few believed it could be disrupted.

When cloud computing arrived, it unlocked entirely new architectural possibilities. Snowflake leveraged these cloud-native capabilities to deliver a product that was "10x cheaper, 10x faster" than existing solutions, causing the entire market to rapidly shift in their direction.

This experience taught Bucky a crucial lesson about the accelerating pace of technological adoption. Each subsequent wave of innovation seems to happen faster than the previous one - from on-premise to cloud, PC to mobile, and now cloud to AI. The acceleration occurs because each new layer stacks on top of previous innovations "like layers on a cake," unlocking economic transformation the world hasn't seen before.

Timestamp: [11:58-13:26]

🗄️ LLMs as the New Database Paradigm

Bucky presents a fascinating framework for understanding LLMs by comparing them to databases - both analytical (like Snowflake) and transactional (where hyperscalers like AWS and Google, with their managed services, have displaced Oracle's long-standing dominance).

He explains that you query LLMs and retrieve information, similar to querying relational databases, though the process differs in that traditional databases require careful consideration of data structure, layout, query speed, and query types. LLMs, while more stochastic, offer greater flexibility and robustness because they don't necessarily require explicit data input.

If LLMs become as ubiquitous as databases - which come in many different shapes and sizes - they represent a fundamental new way to retrieve information. For developers who will build with them in countless new ways, this flexibility and robustness creates a massive opportunity.

Bucky emphasizes that success in infrastructure often comes down to "finding your way into the developer's toolbox." For years, this toolbox included PostgreSQL, Kafka (Confluent's business), and more recently ClickHouse (where Lightspeed is an investor). Now LLMs are entering that same essential toolbox space.
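The database analogy can be made concrete with a toy sketch. Below, a structured SQL query (where schema and query shape must be decided up front) sits next to a natural-language "query" of an LLM. Note that `query_llm` is a hypothetical stand-in for an inference call to a hosted or self-hosted model, not any real API.

```python
import sqlite3

# Structured retrieval: the schema, layout, and query types are fixed up front.
def query_database(sql: str) -> list:
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (name TEXT, region TEXT)")
    conn.executemany("INSERT INTO customers VALUES (?, ?)",
                     [("Acme", "EMEA"), ("Globex", "APAC")])
    return conn.execute(sql).fetchall()

# Stochastic retrieval: the "query" is plain language and no schema is required.
# query_llm is a stand-in for a call to an open or closed model endpoint.
def query_llm(question: str) -> str:
    canned = {"Which customers are in EMEA?": "Acme"}
    return canned.get(question, "(model-generated answer)")

rows = query_database("SELECT name FROM customers WHERE region = 'EMEA'")
answer = query_llm("Which customers are in EMEA?")
print(rows)    # [('Acme',)]
print(answer)  # Acme
```

The trade-off Bucky describes falls out directly: the SQL path is exact but rigid, while the LLM path accepts arbitrary questions at the cost of determinism.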

Timestamp: [13:27-15:29]

💎 Key Insights

  • Infrastructure evolution follows predictable cycles: new technical innovations enable new patterns, which unlock new workloads, which require new infrastructure
  • Each technology wave happens faster than the previous one, with innovations stacking like "layers on a cake" to unlock unprecedented economic transformation
  • Cloud computing enabled entirely new architectural approaches that could deliver 10x improvements in cost and performance over entrenched solutions
  • LLMs can be understood as a new type of database - more stochastic but more flexible and robust than traditional relational databases
  • Success in infrastructure often comes down to "finding your way into the developer's toolbox" alongside essential tools
  • The transition from on-premise to cloud forced founders to completely reinvent their technical assumptions and business models
  • AWS didn't just become a place to experiment but convinced enterprises to move all applications due to unprecedented agility and flexibility

Timestamp: [8:17-15:29]

📚 References

Companies/Products:

  • Amazon Web Services (AWS) - Cloud platform that convinced enterprises to move all applications, providing unprecedented agility and flexibility
  • Snowflake - Data warehousing company that disrupted Oracle's dominance with 10x cheaper, 10x faster cloud-native architecture
  • Oracle and Teradata - Traditional database companies that dominated data warehousing for decades before cloud disruption
  • NetApp and Nimble Storage - Storage companies modernizing enterprise data centers (Nimble Storage was a Lightspeed company)
  • VMware - Software provider for virtualization running on Intel x86 hardware
  • PostgreSQL - Example of essential developer toolbox database
  • Kafka/Confluent - Streaming platform in the developer toolbox
  • ClickHouse - Database company where Lightspeed is an investor
  • Battery Ventures - Where Bucky started his venture career
  • Kleiner Perkins - Where Bucky worked starting January 2018

Technologies/Concepts:

  • LLMs (Large Language Models) - Described as new type of database that developers can query for information
  • Cloud-native architecture - New approach enabled by cloud computing that allowed for 10x improvements
  • Managed database services - How hyperscalers like AWS and Google displaced Oracle in transactional databases
  • On-demand compute and infrastructure resources - Cloud primitives that enabled new types of companies
  • Developer toolbox - Framework for understanding essential infrastructure tools developers rely on

Timestamp: [8:17-15:29]

🎯 The AI Investment Challenge: Speed and Philosophical Questions

Bucky acknowledges that transitioning into AI as an infrastructure investor was one of the most challenging endeavors of his career, for two primary reasons. First, the pace of change was unprecedented - it felt like "a switch was flipped" and suddenly developers were exclusively focused on AI technologies.

Second, the early AI ecosystem carried massive philosophical questions that made investment decisions extremely difficult. These included fundamental uncertainties like whether there would be one dominant model or many specialized models, whether companies would train their own models or use frontier models, and whether models would be open or closed source.

For over a decade in venture, Bucky had been comfortable with a base set of assumptions that allowed him to research, form conclusions, and act on investment opportunities. AI disrupted this established framework completely, making it extremely difficult to reach confident conclusions. He admits being "quite slow to lean in" and probably missing some great opportunities as a result.

Timestamp: [15:52-17:32]

🔓 The Open Source Model Revolution

Bucky explains that the market is now speaking clearly about fundamental AI infrastructure needs. At the core level, companies must decide whether to consume proprietary models from companies like OpenAI or Anthropic, or to use open source models. He observes a broad movement toward open source models driven by cost, performance, and flexibility considerations.

When examining the inference calls that AI-native applications make today, an increasing percentage are going toward open source models, typically for lower-stakes use cases where cost, convenience, and control matter most. This trend is becoming increasingly difficult to ignore from an investment perspective.

The shift toward open models creates significant infrastructure requirements. Companies need platforms to run these models (leading to explosive growth in inference platform businesses) and data infrastructure to connect proprietary data to open models for fine-tuning and post-training - capabilities unavailable with closed models.

Timestamp: [17:38-19:08]

💰 Beyond Cost: The Real Drivers of Open Source Adoption

Mike and Bucky explore the investment implications of the open/closed model divide. While closed model providers like OpenAI and Anthropic handle infrastructure internally or through partnerships with Microsoft and Amazon, open source models create investment opportunities in the supporting infrastructure components that developers must now consider.

However, Bucky clarifies that cost isn't the only driver of open source adoption. Recent dramatic price drops in services like OpenAI's GPT-4, with token prices decreasing by multiple orders of magnitude annually, mean very cheap and performant closed models will remain available.

The real drivers for open source adoption include: better latency properties from smaller models, data sovereignty requirements where information cannot leave specific environments, and the desire for full control over the model. As open source models become more performant, the traditional developer benefits of open source (seen in databases and middleware) become increasingly compelling.

Timestamp: [19:08-20:50]

📱 On-Device AI: Apple's WWDC Disruption

Mike raises the implications of Apple's recent WWDC announcement, where they revealed a platform in iOS allowing developers to access on-device models with totally free inference. This creates an interesting dynamic in the AI infrastructure landscape.

Bucky explains that Lightspeed has a company called Cartesia working in the on-device space. Currently, many use cases still require the quality and performance of larger models hosted in the cloud - for example, code generation still relies heavily on large frontier models rather than local models due to performance requirements.

However, he sees clear opportunities for on-device models in scenarios where latency is critical and performance is "good enough," such as AI assistants with robust audio and speech capabilities on mobile devices. This represents a discrete market opportunity, particularly because frontier labs seem less interested in this constrained environment - they're focused on building larger models for maximum intelligence rather than shrinking models for specific devices.

Timestamp: [20:50-23:04]

๐Ÿ‹๏ธ Test-Time Compute and Reinforcement Learning Infrastructure

Bucky introduces an exciting new scaling paradigm in the form of test-time compute and reinforcement learning. When talking to frontier lab teams, they describe being "infrastructure constrained" - not meaning they lack GPUs, but that they need specialized infrastructure to build, maintain, and scale reinforcement learning environments for complex tasks like programming, mathematics, and qualitative tasks with unclear reward models.

The challenge lies in creating environments to simulate real-world scenarios where agents can learn effectively. For example, training an offensive security agent to behave like a hacker requires spinning up servers, placing vulnerabilities on those servers, and creating environments that represent real-world use cases where the agent can learn to find and exploit vulnerabilities.

This creates a complex "ball of wax" around creating arbitrary environments where agents can learn in practical and scalable ways. Bucky sees significant innovation opportunities in building these environments, maintaining them, and ensuring the results from agents learning in these environments produce good outcomes. This represents an entirely new category of simulation infrastructure.
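To make the offensive-security example concrete, here is a toy, gym-style sketch of such an environment: a vulnerability is "placed" on one port of a simulated server, and the agent is rewarded for finding it. The class name, reward values, and port-sweep policy are all illustrative assumptions; a production environment would provision real servers and real vulnerable services.

```python
import random

class VulnServerEnv:
    """Toy RL environment: probe ports on a simulated server to find a
    planted vulnerability. Purely illustrative -- a real environment
    would spin up actual servers and place real vulnerabilities."""

    def __init__(self, n_ports: int = 16, seed: int = 0):
        self.n_ports = n_ports
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        self.vulnerable_port = self.rng.randrange(self.n_ports)  # plant the flaw
        self.steps = 0
        return {"ports_probed": 0}

    def step(self, port: int):
        self.steps += 1
        found = port == self.vulnerable_port
        reward = 1.0 if found else -0.01  # sparse success, small cost per probe
        done = found or self.steps >= self.n_ports
        return {"found": found}, reward, done

# Trivial baseline policy: sweep every port until the exploit lands.
env = VulnServerEnv()
env.reset()
total = 0.0
for port in range(env.n_ports):
    obs, reward, done = env.step(port)
    total += reward
    if done:
        break
print(obs["found"])  # True
```

The hard part Bucky points to is exactly what this toy hides: building and maintaining environments like this at scale, with realistic services and reward signals that actually reflect good outcomes.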

Timestamp: [23:16-24:34]

💎 Key Insights

  • AI infrastructure investing presented unprecedented challenges due to the speed of change and fundamental philosophical uncertainties about model architectures
  • The shift toward open source models is driven by cost, performance, flexibility, latency, and data sovereignty requirements rather than cost alone
  • Open source adoption creates significant investment opportunities in inference platforms and data infrastructure for fine-tuning
  • On-device AI represents a discrete market opportunity that frontier labs are less likely to pursue due to their focus on maximum intelligence
  • Test-time compute and reinforcement learning require entirely new categories of simulation infrastructure for agent training environments
  • Token prices are decreasing by multiple orders of magnitude annually, making closed models increasingly cost-competitive
  • The infrastructure requirements for training specialized agents (like offensive security) involve complex environment simulation and maintenance

Timestamp: [15:52-24:34]

📚 References

Companies/Products:

  • OpenAI and Anthropic - Examples of closed/proprietary model providers
  • Apple - Announced on-device AI platform at WWDC with free inference
  • Microsoft and Amazon - Partnership examples for closed model infrastructure
  • Cartesia - Lightspeed portfolio company working in on-device AI space
  • Samsung - Mentioned alongside Apple as mobile device manufacturer

Technologies/Concepts:

  • Open source models - Models where companies have access to weights for fine-tuning and control
  • Closed/Proprietary models - Models from companies like OpenAI where infrastructure is handled internally
  • Test-time compute - New scaling paradigm for AI inference and reasoning
  • Reinforcement learning environments - Specialized infrastructure for training agents on complex tasks
  • On-device models - AI models that run locally on mobile devices or laptops
  • Inference platforms - Infrastructure businesses supporting open source model deployment
  • Fine-tuning and post-training - Processes for customizing open models with proprietary data
  • Token prices - Pricing metric for AI model usage that's decreasing rapidly
  • WWDC (Worldwide Developers Conference) - Apple's annual developer conference where on-device AI was announced

Use Cases:

  • Code generation - Example of use case still requiring large frontier models
  • Offensive security/hacking - Example of specialized agent training requiring custom simulation environments
  • AI assistants with audio and speech - Example of on-device AI use case where latency matters

Timestamp: [15:52-24:34]

🧪 Reinforcement Learning: The Next Scaling Paradigm

Bucky explains that simulation environments for reinforcement learning represent very fertile ground that Lightspeed is excited about as a firm. He positions reinforcement learning as the next scaling paradigm following pre-training and test-time compute, making the infrastructure around RL critical for enterprise accessibility.

If the industry believes that RL represents the future of AI scaling, then infrastructure that makes reinforcement learning "easy and easily accessible for enterprises" will become essential. This creates significant opportunities for companies building the foundational tools and platforms that enable widespread RL adoption.

The intelligence and entrepreneurial activity in this space reflects its importance, with many smart people in the industry dedicating time to building these capabilities. This represents a foundational shift in how AI systems will be developed and deployed at scale.

Timestamp: [24:41-25:04]

🤖 The Agent Infrastructure Explosion

Bucky observes that even in the most recent Y Combinator batch, numerous companies are betting on a future where agents will be ubiquitous, requiring infrastructure, tools, and primitives to function like humans do. This creates entirely new categories of infrastructure needs.

The scope of agent infrastructure requirements is vast, ranging from basic operational needs like which browser an agent should use when accessing the web, to complex integration challenges like connecting to tools on computers and in the cloud, to data pipeline problems like bringing information from third-party applications into the agent.

These represent "really needy problems" that will likely spawn successful and valuable companies. The opportunity extends beyond just training agents to do work - it encompasses providing them with all the tools and infrastructure necessary to execute that work effectively.
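One way to picture these "tools and primitives" is a registry that an agent loop dispatches against. The sketch below is an assumption-laden illustration: the tool names, the stub implementations, and the keyword-based routing stand in for a real agent, which would let the model choose the tool and call live services (a browser, a third-party API, a data pipeline).

```python
from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {}

def tool(name: str):
    """Register a capability the agent is allowed to use."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("browser")
def browse(url: str) -> str:
    return f"fetched {url}"        # stand-in for a headless browser session

@tool("crm_lookup")
def crm_lookup(record: str) -> str:
    return f"record for {record}"  # stand-in for a third-party data pipeline

def run_agent(task: str) -> str:
    # Hypothetical routing: a real agent would have the model pick the tool.
    name = "browser" if task.startswith("http") else "crm_lookup"
    return TOOLS[name](task)

print(run_agent("https://example.com"))  # fetched https://example.com
print(run_agent("Acme Corp"))            # record for Acme Corp
```

Each stub here corresponds to one of the infrastructure gaps Bucky names: which browser the agent uses, how it reaches tools in the cloud, and how third-party data flows into it.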

Timestamp: [25:04-25:56]

🔍 Non-Obvious Agent Infrastructure Opportunities

When Mike asks about less obvious infrastructure opportunities beyond the commonly discussed MCP (Model Context Protocol) and data source connections, Bucky highlights two particularly interesting areas that weren't initially apparent to him.

First, the simulation environments for reinforcement learning that he discussed earlier - while obvious to those inside frontier labs, this need has less exposure outside those organizations, making it a non-obvious but fundamental opportunity.

Second, he identifies a critical challenge around legacy system integration. As AI-native vertical software companies and traditional companies adding AI capabilities need to build agentic flows, they must interact with old, often on-premise software systems that weren't designed to accommodate agents.

Timestamp: [26:03-26:46]

🚛 Legacy System Integration: The Long Tail Challenge

Bucky elaborates on the legacy integration challenge by providing specific examples from industries where automation delivers high value. He mentions connecting agents to on-premise Electronic Health Records (EHR) systems in healthcare or logistics management software running on servers in trucking warehouse offices.

These integration points represent critical infrastructure needs for enabling agents to make their full impact in esoteric industries where automation is extremely valuable. The challenge involves teaching agents to engage with technology that has existed for a long time but was never built to accommodate automated interactions.

Using Samsara (a logistics technology company) as an example, Bucky explains how their trucking company customers have very old technology that they want to build automations over. The question becomes how to enable agents to actually engage with that legacy technology in ways that align with these companies' operational needs.

Timestamp: [26:46-27:45]

📊 Salesforce's Data Restriction Strategy

Mike and Bucky discuss recent news about Salesforce becoming more restrictive with how companies can access data inside Slack. The policy change specifically targets how companies use Slack data to build AI systems, recognizing the high value of this data and Salesforce's own ambitions to build competitive AI products.

Salesforce has decided to give themselves an inherent advantage by restricting vendors who might be building products competitive to what's on Salesforce's roadmap. This creates a significant precedent that could influence how other major software companies approach data access in the AI era.

The situation presents a potential "Rorschach test" for Salesforce's leverage with customers. The outcome could go one of two ways: either customers will push back, demanding the ability to bring the best technology to bear on their data, or Salesforce's restrictions will be accepted, setting a precedent for other major platforms.

Timestamp: [27:45-28:57]

๐Ÿ’Ž Key Insights

  • Reinforcement learning represents the next scaling paradigm after pre-training and test-time compute, requiring new enterprise-accessible infrastructure
  • The Y Combinator batch reflects widespread betting on ubiquitous agents, creating vast new infrastructure categories
  • Agent infrastructure needs span from basic operational tools (browsers) to complex integrations (legacy systems and data pipelines)
  • Simulation environments for RL are non-obvious but fundamental opportunities, particularly visible to frontier lab insiders
  • Legacy system integration represents a critical challenge for agents to impact esoteric industries with high automation value
  • Industries with old technology (healthcare EHRs, trucking logistics) need agent-enabled automation solutions
  • Salesforce's Slack data restrictions set important precedents for how major platforms will control AI-valuable data
  • The enterprise response to data restrictions will test the leverage major software companies have over their customers

Timestamp: [24:41-28:57]

๐Ÿ“š References

Companies/Products:

  • Y Combinator - Recent batch showing high number of agent-focused companies
  • Salesforce - Company implementing new data access restrictions for Slack
  • Slack - Platform with valuable data that Salesforce is restricting access to for AI development
  • Samsara - Logistics technology company serving trucking companies as example of legacy integration challenges
  • Lightspeed - Bucky's firm that's excited about reinforcement learning infrastructure opportunities

Technologies/Concepts:

  • MCP (Model Context Protocol) - Commonly discussed technology for connecting models to data sources
  • Reinforcement Learning (RL) - Next scaling paradigm after pre-training and test-time compute
  • Simulation environments - Infrastructure for training agents through reinforcement learning
  • Agentic flows - Automated workflows that agents execute
  • On-premise EHR systems - Electronic Health Records systems in healthcare that require agent integration
  • Legacy logistics management software - Old systems in trucking warehouses that need agent connectivity
  • Pre-training and test-time compute - Previous scaling paradigms that RL is expected to follow

Industries/Use Cases:

  • Healthcare - Industry with on-premise EHR systems requiring agent integration
  • Trucking and logistics - Industry with legacy warehouse management software needing automation
  • Enterprise software - Both AI-native companies and traditional companies adding AI capabilities

Timestamp: [24:41-28:57]

โš–๏ธ The Salesforce Precedent: A Test of Platform Power

Bucky expresses concern about the potential precedent Salesforce's data restrictions could set if customers accept the policy without pushback. While he believes what's best for customers typically prevails (making widespread acceptance unlikely), he worries about the implications if other major platforms follow suit.

If Salesforce's restrictive approach succeeds, companies like Atlassian and ServiceNow could quickly implement similar policies, saying "we're going to do that too." This would create a challenging environment for AI application companies that rely on customer data stored in these core systems of record to innovate effectively.

Bucky emphasizes he's watching this situation closely because it could fundamentally alter the landscape for AI innovation, particularly for companies that depend on accessing customer data from established enterprise platforms.

Timestamp: [29:02-29:38]

๐ŸŒŠ Consumer vs Enterprise Data Wars

Mike draws parallels between Salesforce's Slack restrictions and similar trends on the consumer side, where publishers object to their content being scraped and surfaced through RAG (Retrieval-Augmented Generation) systems. Services are emerging to help publishers monetize this usage, and platforms like Reddit have locked their data behind one-off licensing deals.

However, the Salesforce situation represents "the first big shoe to drop" on the enterprise side, with fundamental differences from consumer scenarios. Enterprise businesses typically expect data exchange via APIs as commonplace, making the sudden restriction particularly jarring.

Mike highlights a critical vulnerability for businesses that don't own their data but instead rely on other people's data - they're now in an "uncomfortable spot" wondering if their essential data sources might disappear overnight. This creates new strategic considerations for AI companies about data dependency and ownership.

Timestamp: [29:39-30:41]

๐Ÿ“ Data Ownership: Complex Questions of Rights and Format

The conversation delves into the complexities of data ownership, comparing Reddit's user-generated content with enterprise platforms like Slack. Bucky notes that while customer data stored in Salesforce should contractually belong to the customer, the comparison with Reddit isn't perfect since Reddit represents a global corpus of user-generated content with unclear ownership.

Mike brings experience from building a large user-generated content platform with "hundreds of billions of hours of audio content," explaining that typically users own their content while platforms have licenses to distribute and monetize on behalf of users. However, Slack presents unique complexities.

The Slack situation is particularly murky because while users write the words, the content is "wrapped in a format that is Slack's own unique proprietary format." This creates ambiguity about where user ownership ends and platform control begins, making it a fascinating test case for data rights in the AI era.

Timestamp: [30:47-31:52]

๐ŸŒ A New Era of Internet and Enterprise Software

Both Bucky and Mike agree that these developments signal the emergence of a new era for both the internet generally and enterprise software specifically. The data access restrictions and platform control dynamics represent fundamental shifts in how digital infrastructure and business relationships will operate.

Mike concludes that they're "about to see something" - a transformation that will reshape how companies interact with platforms, how data flows between systems, and how innovation happens in the AI-driven economy.

This acknowledgment sets the stage for understanding that the current moment represents an inflection point where established norms around data access, platform openness, and enterprise software integration are being fundamentally reconsidered.

Timestamp: [31:52-32:04]

๐Ÿ–ฅ๏ธ Beyond "GPU Resellers": Understanding AI Compute

Mike transitions to discussing AI compute, referencing Bucky's involvement in the Together deal at Kleiner and asking about the evolution of startups in this space beyond just Nvidia. Bucky finds AI compute fascinating precisely because it's so misunderstood by the media and broader market.

He takes issue with the common characterization of these companies as mere "GPU resellers," arguing that while technically accurate (they take delivery of GPUs they own or lease, run software on them in data centers, and deliver services to customers), the framing misses the fundamental value proposition.

Bucky poses a provocative question: what's the fundamental difference between companies characterized as "GPU resellers" by publications like The Information and AWS itself? This reframing suggests that AI compute companies may be creating similar foundational value to what AWS created in the early cloud era, despite being dismissed with reductive terminology.

Timestamp: [32:10-32:57]

๐Ÿ’Ž Key Insights

  • Salesforce's data restrictions could set dangerous precedents if other enterprise platforms like Atlassian and ServiceNow follow suit
  • Enterprise data wars differ from consumer battles because businesses expect API-based data exchange as standard practice
  • Companies without their own data face new vulnerabilities as platforms restrict access to previously available data sources
  • Data ownership in enterprise platforms involves complex questions about user rights versus platform formatting and control
  • The current moment represents a fundamental shift toward a "new era" of internet and enterprise software relationships
  • AI compute companies are mischaracterized as "GPU resellers" when they may be creating foundational value similar to early AWS
  • The distinction between AI compute providers and traditional cloud providers may be less significant than commonly portrayed

Timestamp: [29:02-32:57]

๐Ÿ“š References

Companies/Products:

  • Salesforce - Company implementing restrictive Slack data policies
  • Slack - Platform with data access restrictions affecting AI development
  • Atlassian and ServiceNow - Core enterprise systems that could follow Salesforce's precedent
  • Reddit - Platform that locked data behind licensing deals in consumer space
  • AWS - Comparison point for understanding AI compute value proposition
  • Nvidia - Referenced as distinct from startup AI compute companies
  • Together - Company involved in Bucky's deal at Kleiner Perkins
  • Lightspeed - Mike and Bucky's current firm, used as example of Slack data ownership
  • Kleiner Perkins - Bucky's previous firm where he worked on Together deal

Technologies/Concepts:

  • RAG (Retrieval-Augmented Generation) - Technology for scraping and using content that publishers oppose
  • APIs - Application Programming Interfaces that traditionally enabled enterprise data exchange
  • GPU resellers - Media term for AI compute companies that Bucky disputes
  • x86 servers - Traditional server hardware that AWS built its business on
  • User-generated content - Content created by platform users with complex ownership questions

Publications/Media:

  • The Information - Publication referenced for characterizing AI compute companies as "GPU resellers"

Timestamp: [29:02-32:57]

โ˜๏ธ AI Compute: The Misunderstood Cloud Providers

Bucky argues that AI compute companies are fundamentally misunderstood because they're being judged as mere "resellers" rather than as early-stage cloud providers. AWS has had multiple decades to build higher-level APIs and services on top of its core server offering, while AI compute companies are just beginning their journey toward similar sophistication.

These companies are in the early stages of innovation that transformed AWS from a basic server provider into a comprehensive cloud platform. In the long term, the winning AI compute companies will resemble AWS or Google Cloud Platform more than colocation providers (the pejorative equivalent of "reseller").

The mischaracterization stems from examining these companies at too early a stage in their development, missing the potential for them to evolve into full-stack cloud platforms optimized for AI workloads.

Timestamp: [33:03-33:38]

๐Ÿ‹๏ธ Training Workloads: The Price-Driven Commodity Market

Bucky explains that AI compute divides into two distinct modalities: training and inference. Training customers are typically frontier labs, or well-capitalized companies working in modalities (like audio) that don't overlap with the frontier labs' focus. These customers require large numbers of chips - thousands or tens of thousands - making their primary decision criteria straightforward.

The training market is primarily price-driven, with customers focused on minimizing capex exposure while ensuring chip reliability and meeting SLAs for training job completion without excessive failure. This price sensitivity limits opportunities for software differentiation since companies like SSI and OpenAI buying tens of thousands of chips prioritize cost optimization above other considerations.

Bucky characterizes this modality as "probably a little bit more commodity than not today" across the landscape from hyperscalers to new clouds like CoreWeave or Together AI, though it remains a very fast-growing market where the lion's share of dollars currently reside.

Timestamp: [33:45-34:59]

โš™๏ธ Custom CUDA Kernels: The Software Differentiation Layer

When Mike asks about startup opportunities in the commoditized training market, Bucky identifies software-driven differentiation as the key opportunity area. He uses Together AI as an example of how companies can create value through custom CUDA kernels - specialized software paths that optimize how models interact with GPU hardware.

CUDA is Nvidia's programming platform for its GPUs, and kernels are the low-level functions that actually run on the hardware. Custom kernels create faster software pathways between model and GPU, achieving higher utilization - and since GPUs are extremely expensive, maximizing utilization delivers significant value to customers.

Together AI's value proposition centers on democratizing the custom kernel expertise that typically exists only within frontier labs like Anthropic and OpenAI. They've assembled talent from Stanford and other key institutions to deliver platform engineering services that were previously exclusive to well-resourced frontier labs.
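The stakes of kernel-level optimization can be made concrete with a back-of-envelope calculation of Model FLOPs Utilization (MFU) - the fraction of a GPU's peak throughput a training job actually achieves. The sketch below is illustrative only: the peak-FLOPS figure, token rates, and model size are hypothetical, and the "~6 FLOPs per parameter per token" rule is a common estimate, not a vendor spec.

```python
# Hypothetical numbers to illustrate why custom kernels matter:
# a faster kernel path raises tokens/sec, which raises utilization
# of a very expensive chip.

def mfu(tokens_per_sec: float, params: float, peak_flops: float) -> float:
    """Model FLOPs Utilization: achieved FLOPs / peak FLOPs.
    Uses the common ~6 * params FLOPs-per-token training estimate."""
    achieved = 6 * params * tokens_per_sec
    return achieved / peak_flops

PEAK = 1e15      # assume ~1 PFLOPS peak per GPU (illustrative)
PARAMS = 70e9    # a 70B-parameter model

baseline = mfu(800, PARAMS, PEAK)    # generic, off-the-shelf kernels
optimized = mfu(1200, PARAMS, PEAK)  # hand-tuned, fused kernels

print(f"baseline MFU:  {baseline:.0%}")   # ~34%
print(f"optimized MFU: {optimized:.0%}")  # ~50%
```

On these assumed numbers, a 1.5x kernel speedup is the difference between paying for a cluster that is one-third utilized and one that is half utilized - which is the value proposition Bucky attributes to this expertise.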

Timestamp: [35:05-36:18]

๐Ÿš€ Inference: The Fast-Growing, Software-Rich Opportunity

Bucky describes inference as the second AI compute modality, characterized by rapid growth as more companies move from training models to putting them in production for customer-facing applications. Unlike training, inference is "more online," meaning endpoints are embedded directly in products, creating production software concerns around scaling, uptime, monitoring, and performance optimization.

The inference market presents more sophisticated software challenges, including ensuring model accuracy isn't compromised while optimizing for faster token output. These operational complexities mirror traditional production software challenges but with AI-specific requirements.

Companies like Together AI, Fireworks, and Baseten are emerging as leaders by solving the operational pain points of serving custom or open source models in applications. At frontier labs like OpenAI and Anthropic, dedicated serving teams handle these inference infrastructure challenges to ensure great customer experiences.
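The economics behind this serving race can be sketched with simple unit-cost arithmetic: at a fixed GPU rental price, every improvement in tokens-per-second directly lowers the cost per token served. The prices and throughputs below are hypothetical, chosen only to show the shape of the math.

```python
# Illustrative serving-cost arithmetic (hypothetical $/hr and tokens/sec):
# why optimized inference stacks compete on throughput.

def cost_per_million_tokens(gpu_hourly_usd: float, tokens_per_sec: float) -> float:
    """Unit cost of inference on one GPU at steady-state throughput."""
    tokens_per_hour = tokens_per_sec * 3600
    return gpu_hourly_usd / tokens_per_hour * 1_000_000

# Same $4/hr GPU, two serving stacks:
naive = cost_per_million_tokens(4.0, 500)    # unoptimized serving
tuned = cost_per_million_tokens(4.0, 2000)   # batching + tuned kernels

print(f"naive: ${naive:.2f} per million tokens")  # ≈ $2.22
print(f"tuned: ${tuned:.2f} per million tokens")  # ≈ $0.56
```

A 4x throughput gain from better batching and kernels cuts unit cost by 4x - which is why serving expertise, not just chip access, is where inference companies differentiate.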

Timestamp: [36:18-37:27]

๐ŸŽฏ The Frontier Lab Strategy: Owning the Full Stack

When Mike asks how much frontier labs want to own of the inference workflow, Bucky explains the strategic divide between open and closed model providers. Companies serving open source models want to own the entire inference stack because that's where infrastructure value creation occurs.

Closed model providers like OpenAI and Anthropic must solve all inference problems internally because their customers expect complete solutions - they want to query LLMs and receive answers without worrying about autoscaling, performance, or operational details.

Frontier labs want their offerings to feel magical and operate behind the scenes, delivering the best possible customer experience. Meanwhile, inference-focused companies aim to enable other organizations to deliver similar experiences to their customers without requiring the same serving expertise that OpenAI, Anthropic, or DeepMind possess.

Timestamp: [37:34-38:37]

๐Ÿ”ฌ Alternative Architectures: Fertile Ground for Startups

Mike shifts the conversation to new model architectures beyond transformers, asking how much Bucky considers potential architectural innovations as an infrastructure investor. Bucky sees significant opportunity in alternative architectures precisely because frontier labs are heavily committed to scaling transformers through better data curation, more data, and increased compute.

This focus creates white space for startups exploring what might happen if alternative architectures become the preferred path forward, or if blending alternative architectures with transformers unlocks industry breakthroughs that haven't been discovered yet.

The startup opportunity is particularly compelling because alternative architecture research is less compute-bound - companies don't need to compete with frontier labs by purchasing 50,000 or 100,000 H200 GPUs to prove that alternative architectures can work and scale effectively.

Timestamp: [38:43-39:46]

๐Ÿ’Ž Key Insights

  • AI compute companies are misunderstood early-stage cloud providers, not mere "GPU resellers" - they're following AWS's evolutionary path
  • Training workloads are price-driven and commodity-like, requiring thousands of chips with reliability as the secondary concern
  • Custom CUDA kernels represent the key software differentiation layer in training, democratizing frontier lab expertise
  • Inference is the faster-growing, more software-rich opportunity with production-grade operational requirements
  • Frontier labs must own the full inference stack for closed models to deliver "magical" customer experiences
  • Alternative architectures offer fertile startup opportunities because frontier labs are focused exclusively on scaling transformers
  • Startups exploring alternative architectures have lower compute requirements than transformer-scaling approaches

Timestamp: [33:03-39:46]

๐Ÿ“š References

Companies/Products:

  • AWS - Comparison point for understanding AI compute evolution and higher-level services
  • Google Cloud Platform (GCP) - Example of mature cloud provider AI compute companies might evolve to resemble
  • CoreWeave and Together AI - Examples of new AI compute clouds in the training space
  • Together AI - Company Bucky worked on at Kleiner, specializing in custom CUDA kernels and training platforms
  • Fireworks and Baseten - Emerging leaders in the inference space
  • OpenAI, Anthropic, SSI - Frontier labs mentioned as major training workload customers
  • DeepMind - Frontier lab with serving expertise referenced alongside OpenAI and Anthropic
  • Stanford - Source of talent for Together AI's platform engineering team

Technologies/Concepts:

  • CUDA - Nvidia's programming language for GPUs
  • Custom CUDA kernels - Specialized software paths optimizing model-GPU interactions for higher utilization
  • Training vs Inference modalities - Two distinct AI compute workloads with different characteristics
  • Transformers - Current dominant architecture that frontier labs are focused on scaling
  • Alternative architectures - Non-transformer approaches that represent startup opportunities
  • H200 GPUs - High-end Nvidia chips mentioned in context of large-scale training requirements
  • SLAs (Service Level Agreements) - Reliability requirements for training job completion
  • Autoscaling - Production software concern for inference workloads

Infrastructure Concepts:

  • Colocation providers - Traditional data center model used as comparison point
  • Serving teams - Dedicated infrastructure teams at frontier labs handling inference
  • Production software concerns - Scaling, uptime, monitoring, and performance optimization
  • Token output optimization - Speed improvements in model response generation

Timestamp: [33:03-39:46]

๐Ÿงฌ State Space Models: Verticalization Strategy

Bucky elaborates on alternative architectures by highlighting state space models, particularly through the work of Cartesia. While it's still early to determine whether any alternative architecture will reach the ubiquity of transformers, state space models have demonstrated interesting properties for specific use cases.

State space models excel in scenarios requiring long sequence lengths on the input side where you want to pass the model extensive context while maintaining low latency. These characteristics make them particularly well-suited for certain applications where traditional transformers may struggle.

Cartesia has taken a verticalization approach, recognizing these unique properties and focusing on building "the best audio models and the best platform for developing audio agents and voice agents in the market." This represents one potential path for alternative architectures - finding specific domains where they excel and building focused solutions rather than competing directly with general-purpose transformers.
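The long-sequence advantage mentioned above comes down to asymptotic scaling: self-attention's cost grows quadratically with sequence length, while a state space model's recurrent scan grows linearly. The rough operation counts below omit constants and are only meant to show the shape of that gap; the dimension values are arbitrary.

```python
# Rough asymptotic comparison (constants omitted): the scaling property
# behind state space models' long-context / low-latency appeal.

def attention_ops(seq_len: int, dim: int) -> int:
    # O(n^2 * d): every token attends to every other token
    return seq_len * seq_len * dim

def ssm_ops(seq_len: int, state_dim: int, dim: int) -> int:
    # O(n * N * d): a linear scan carrying a fixed-size state
    return seq_len * state_dim * dim

for n in (1_000, 10_000, 100_000):
    ratio = attention_ops(n, 1024) / ssm_ops(n, 16, 1024)
    print(f"seq_len={n:>7}: attention/SSM op ratio = {ratio:,.1f}x")
```

The ratio grows linearly with sequence length, so the longer the input context, the larger the relative advantage - consistent with Bucky's framing of long-sequence, low-latency use cases as the natural wedge.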

Timestamp: [39:52-40:39]

๐Ÿš€ The Nonlinear Scaling Question

Beyond verticalization, Bucky identifies another potential path for alternative architectures: the possibility that one of these approaches (whether state space models or others) could "pay off in like a nonlinear way" that justifies raising substantial capital to scale them to the levels where current LLMs are being pushed.

This represents a fascinating unanswered question for infrastructure investors - determining whether alternative architectures could suddenly demonstrate breakthrough performance that warrants massive scaling investments, potentially competing directly with transformer-based approaches.

The uncertainty around this question makes it particularly interesting from an investment perspective, as it could represent either massive opportunity or misallocated capital depending on how these alternative approaches develop.

Timestamp: [40:46-41:05]

๐ŸŽจ Cross-Modal Innovation: Diffusion Meets LLMs

Bucky highlights an exciting trend of cross-pollination where techniques like diffusion, which became popular for image and video generation, are now being applied to LLMs. This cross-modal innovation creates "more exciting range of outcomes" and benefits the entire industry.

The experimentation across different modalities ultimately serves users of AI technology, as the diversity of approaches guarantees that developers and consumers will get the best possible products. This competitive dynamic drives innovation beyond any single architectural approach.

Bucky views this experimentation as fundamentally positive for the industry, creating healthy competition and innovation that ultimately benefits end users through better AI products and capabilities.

Timestamp: [41:05-41:28]

๐Ÿ’ผ The State of Venture: Conventional Wisdoms Proving True

Mike asks Bucky for his perspective on venture capital's evolution, particularly around AI, asking about winners, losers, and outlook for current fund vintages. Bucky responds by acknowledging several conventional wisdoms that he believes are actually proving true.

First, companies are staying private longer, leading them to raise more capital in private markets with more returns generated while still private. Second, the scale of AI opportunities exceeds anything previously seen, meaning these companies will require significantly more investor capital than historical companies.

As a consequence of both larger opportunity scale and increased capital requirements, Bucky expects more capital flowing into these companies with larger end outcomes. This could lead to venture returns at an industry level that "really start to look a lot better even than they have been in the past."

Timestamp: [41:35-42:40]

๐ŸŒŸ Trillion-Dollar Companies: The New Paradigm

Bucky expresses his belief that the industry will witness trillion-dollar companies going public as multi-trillion-dollar companies "for the first time." This represents a fundamental shift in the scale of venture outcomes, reflecting the unprecedented scope of AI opportunities.

This prediction underscores his conviction that AI represents a fundamentally different category of technological transformation, one that will generate returns and company valuations that exceed historical precedents in venture capital.

Though the segment ends mid-sentence, the statement clearly positions AI as creating a new paradigm for venture returns and company scale, suggesting that traditional frameworks for understanding venture outcomes may need to be reconsidered given the magnitude of AI opportunities.

Timestamp: [42:40-42:46]

๐Ÿ’Ž Key Insights

  • Alternative architectures offer less capex-intensive innovation opportunities compared to scaling frontier transformers
  • State space models demonstrate unique advantages for long sequence, low-latency use cases, leading to verticalization strategies
  • The question remains whether alternative architectures could achieve nonlinear breakthroughs justifying massive scaling investments
  • Cross-modal technique transfer (like diffusion to LLMs) drives healthy innovation competition across the industry
  • Conventional venture wisdom about longer private periods and larger capital requirements is proving accurate for AI
  • AI opportunities represent unprecedented scale, requiring more investor capital than historical companies
  • The venture industry may see significantly improved returns due to larger AI company outcomes
  • Trillion-dollar companies may go public as multi-trillion-dollar entities for the first time in venture history

Timestamp: [39:52-42:46]

๐Ÿ“š References

Companies/Products:

  • Cartesia - Company known for advancing state space models and building audio/voice agent platforms
  • Battery Ventures - Bucky's first venture firm
  • Kleiner Perkins - Bucky's previous firm
  • Lightspeed - Bucky's current firm, described as a large platform

Technologies/Concepts:

  • State Space Models (SSM) - Alternative architecture with advantages for long sequence, low-latency use cases
  • Transformers - Current dominant architecture that alternative approaches are compared against
  • Diffusion techniques - Methods popular in image/video generation now being applied to LLMs
  • Long sequence lengths - Input characteristic where state space models demonstrate advantages
  • Cross-modal innovation - Transfer of techniques between different AI modalities (image, video, text)
  • LLMs (Large Language Models) - Referenced in context of scaling and cross-modal technique application

Venture Capital Concepts:

  • Private market capital - Increased funding occurring before companies go public
  • Venture returns - Industry-level performance that may improve due to AI opportunities
  • Multi-trillion dollar valuations - Unprecedented company scale Bucky predicts for AI companies
  • Fund vintages - Referenced in context of current venture fund performance outlook

Timestamp: [39:52-42:46]

๐Ÿ—๏ธ Large Platform Advantage: The Right Side of History

Bucky explains why large platforms like Lightspeed have advantages in the AI era due to their "chip stack" - the ability to capitalize companies throughout their entire lifecycle in ways that smaller firms cannot. This comprehensive support capability becomes crucial as AI companies require unprecedented amounts of capital and long-term partnership.

Being "on the right side of history" means positioning at large platforms that can meet ambitious AI founders where they are and serve as their "partner of record all the way through." This end-to-end support provides significant benefits for both founders and investors navigating the complex AI landscape.

The platform advantage stems from the ability to provide hundreds of millions to billions in investment over time, combined with global reach and resources that smaller firms simply cannot match.

Timestamp: [42:53-43:17]

๐ŸŽฏ The Bifurcation Reality: Scale vs Specialization

Bucky identifies "increasingly clear room for specialists" - investors focusing on specific stages (like formation-stage AI companies) or verticals (vertical AI companies). This specialization represents another viable path for differentiation as the industry bifurcates.

The bifurcation occurs because founders now have unprecedented options, allowing them to optimize their investor selection based on specific needs. A formation-stage founder can choose between a large platform offering comprehensive long-term support or a specialist who focuses exclusively on early-stage AI company building.

Bucky emphasizes that this isn't the first time "bifurcation" has been discussed in venture, but the current constraints make it "more of an inevitability than ever before." He believes scale and specialization are now the only two paths to meaningful differentiation in the industry.

Timestamp: [43:17-44:36]

๐Ÿ‘ค Individual Brands: The New Industry Dynamic

Bucky observes that the venture industry's online nature has shifted focus toward individual partners rather than just firm brands. While firm brands retain importance, individual reputations, expertise areas, and entrepreneur relationships carry significant weight in investment decisions.

This trend creates accountability for General Partners, making it "impossible for GPs to hide behind their firm's reputations." Success now requires being genuinely good investors, partners, and human beings with high integrity toward founders.

Bucky views this as positive for industry governance and investor behavior, forcing GPs to build strong reputations through good work and high-integrity relationships with founders rather than relying solely on institutional brand recognition.

Timestamp: [44:42-45:23]

๐Ÿš€ Trillion-Dollar Company Categories

When Mike asks what trillion-dollar companies will look like, Bucky identifies several clear categories. First, "whoever gets to win the AGI race is pretty obviously a trillion dollar company," with OpenAI currently holding pole position for delivering AI to consumers in the most novel and dominant way.

Second, the company that owns code generation and builds the most performant frontier model for software engineering agents represents another trillion-dollar opportunity - potentially Anthropic based on current positioning.

Third, vertically integrated companies targeting massive industries like space or defense could achieve trillion-dollar scale simply due to the enormous GDP of these sectors. SpaceX exemplifies this approach and "seems well on its way" to demonstrating this potential.

Foundation model labs that establish market-leading positions in enterprise or consumer segments through finding the right market seam represent the core trillion-dollar opportunity in Bucky's view.

Timestamp: [45:23-46:33]

๐ŸŒ Systemic vs Unique Moment: The Big Question

Bucky poses a fundamental question about whether the current environment will systematically produce more trillion-dollar companies or represents a unique moment that will see "five or six of them" before returning to previous norms where $20 billion companies were considered exceptional outcomes.

While acknowledging uncertainty, he leans toward believing something systemic is occurring due to the "confluence of AI" and other fundamental capabilities like ubiquitous space travel that previously seemed science fiction but are now within reach.

What excites Bucky most isn't just the current set of companies appearing to reach trillion-dollar scale, but the platform effect they'll create. Once these companies become household names and ubiquitous platforms for entrepreneurs to build upon, "the power of these technologies is just so unparalleled" that entirely new categories will emerge.

Timestamp: [46:39-47:30]

๐Ÿค– AI Agent Economics: Beyond Labor Automation

Bucky concludes with a vision of AI agent companies worth a trillion dollars, not merely for automating existing labor spend but for "creating like entirely new economies around the work that they can do that humans weren't capable of." This represents the ultimate expression of AI's transformative potential.

The economic opportunity extends beyond replacing human work to enabling entirely new categories of value creation and economic activity that were previously impossible. This perspective suggests that the trillion-dollar companies of the AI era will be fundamentally different from previous technology giants.

This vision encapsulates Bucky's core thesis about AI's unprecedented economic potential and why the current moment may indeed be systemic rather than a unique historical anomaly.

Timestamp: [47:30-47:42]

๐ŸŽ™๏ธ Podcast Conclusion

Mike thanks Bucky for the conversation, expressing that he feels "way smarter about the state of Infra and AI and venture" and believes listeners and viewers will feel similarly. The conversation concludes with Mike providing information about how to follow Lightspeed and subscribe to Generative Now.

The podcast is produced by Lightspeed in partnership with Pod People, with Mike Mignano as host, and promises to return next week with new content.

Timestamp: [47:42-48:11]

๐Ÿ’Ž Key Insights

  • Large venture platforms have "chip stack" advantages in capitalizing AI companies throughout their entire lifecycle
  • The venture industry is bifurcating into two viable paths: large-scale platforms and specialized boutique firms
  • Individual partner brands and reputations now carry more weight than pure firm branding in investment decisions
  • Trillion-dollar company categories include AGI winners, code generation leaders, and vertically integrated companies in massive industries
  • The fundamental question is whether multiple trillion-dollar companies represent a systemic shift or unique historical moment
  • AI agent companies may create entirely new economies beyond just automating existing human labor
  • Current AI technologies have "unparalleled power" that will enable platform effects and new company categories
  • Founders now have unprecedented options for optimizing their investor selection based on specific needs

Timestamp: [42:53-48:11]

๐Ÿ“š References

Companies/Products:

  • Lightspeed - Bucky's current firm, described as having a "chip stack" for comprehensive AI company support
  • OpenAI - Described as currently holding pole position in the AGI race and in consumer AI delivery
  • Anthropic - Suggested as potentially leading in code generation and software engineering agent models
  • SpaceX - Example of vertically integrated company "well on its way" to trillion-dollar scale in space industry
  • Pod People - Production partner for Generative Now podcast

People:

  • Michael Mignano (Mike) - Host of Generative Now podcast, partner at Lightspeed
  • Bucky Moore - Guest, partner at Lightspeed specializing in infrastructure and AI investing

Concepts:

  • AGI (Artificial General Intelligence) - Race that Bucky believes will produce trillion-dollar companies
  • Code generation - Specific AI capability area that could drive trillion-dollar outcomes
  • Chip stack - Term for comprehensive venture platform capabilities across the company lifecycle
  • Formation stage - Early startup phase where specialized investors can differentiate
  • Vertical integration - Strategy for companies targeting massive industries like space and defense
  • AI agents - Future companies that may create entirely new economies beyond labor automation
  • Bifurcation - Industry trend toward either large platforms or specialized boutique firms

Media/Platforms:

  • Generative Now - Podcast series hosted by Mike Mignano
  • LightspeedVP - Lightspeed's handle on X, YouTube, and LinkedIn
  • X, YouTube, LinkedIn - Social platforms where listeners can follow Lightspeed

Timestamp: [42:53-48:11]