Startup Ideas You Can Now Build With AI

There's never been a better time to start an AI company. Not just because there are new ideas, but because the tech finally makes old ones actually work. On the Lightcone, Garry, Harj, Diana, and Jared talk through the kinds of startups that are suddenly viable thanks to LLMs - from full-stack law firms to personalized tutors to recruiting platforms that can finally scale. They share the patterns they're seeing, the ideas they're excited about, and what it means to live at the edge of the future...

May 16, 2025 • 40:46

Table of Contents

00:43-09:45
09:50-17:37
17:42-25:08
25:15-32:25
32:31-40:34

🚀 The Million-Token Era

The conversation opens with excitement about the revolutionary capabilities of new AI models, particularly highlighting the massive context window now available with Gemini 2.5 Pro.

"We have an incredible number of new startup ideas, some of which are actually very old, and they can only happen right now."

This introduction sets up the central theme that we're at an inflection point where previous startup ideas that couldn't work before are suddenly viable because of AI breakthroughs.

Timestamp: [00:43-01:08]

🔍 Recruiting Startups Reimagined

Harj shares his personal experience with Triplebyte, a recruiting startup he ran for almost 5 years, to illustrate how AI is transforming previously challenging business models.

"There was a period of time when we started Triple Bite around 2015 where recruiting startups were kind of like a really popular type of startup."

He explains that the excitement back then stemmed from applying marketplace models to recruiting, but they faced significant challenges. Triplebyte's approach required building a curated marketplace that evaluated engineers and provided detailed data about candidates.

"We had to spend years essentially building our own software to do thousands of technical interviews to squeeze out every little data point we could... so that we'd effectively build up this labeled data set that we could run machine learning models on."

The complexity of their operation required a three-sided marketplace: companies hiring engineers, engineers looking for jobs, and contracted engineers to conduct the interviews - making the business model extremely challenging.

Timestamp: [01:08-02:39]

⚡ AI-Powered Evaluation

Harj explains how AI, particularly code evaluation models, has transformed what's possible in the recruiting space.

"All of the evaluation piece of it, at least now with AI, is very, very possible. I mean, we can specifically with the AI codegen models, you can do code evaluation."

He highlights Mercor as a hot AI startup following a similar concept to Triplebyte - a marketplace for hiring software engineers - but with a crucial difference enabled by AI.

"I think what AI has unlocked for them is the evaluation piece of it. They could just do [it] on day one using LLMs. They didn't need to build up this big labeled data set."

This technology advantage allowed Mercor to quickly expand beyond engineers to analysts and other knowledge workers - something that would have taken Triplebyte years, since they would have needed to rebuild their labeled datasets for each new category.

"With LLMs, you can just do that on, you know, day one effectively."

This represents a fundamental shift in what's possible, making the recruiting startup space much more exciting than it was five years ago.
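To make the shift concrete, here is a minimal sketch of what LLM-based evaluation can look like: a single call to an OpenAI-style chat API with a grading rubric, instead of years spent building a labeled dataset. The model name, rubric, and JSON fields are illustrative assumptions, not a description of Mercor's or Triplebyte's actual systems.

```python
# Hypothetical sketch: grading a coding submission with an LLM rather than a
# hand-built labeled dataset. Rubric, model name, and schema are assumptions.
import json
from openai import OpenAI

client = OpenAI()

RUBRIC = (
    "You grade technical interview submissions. Score 1-5 for correctness, "
    "code quality, and communication. Reply only with JSON: "
    '{"correctness": int, "code_quality": int, "communication": int, "notes": str}'
)

def evaluate_submission(problem: str, solution: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable code model could be substituted here
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": f"Problem:\n{problem}\n\nCandidate solution:\n{solution}"},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)
```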

Timestamp: [02:39-03:44]

🧩 Marketplace Transformations

Garry builds on Harj's insights, suggesting broader implications for marketplace businesses across industries.

"That's a very powerful prompt for anyone listening - what are marketplaces that are three-sided or four-sided marketplaces that suddenly become, you know, two or three-sided?"

He also points to existing two-sided marketplaces like Duolingo that are now "under fire" as they begin to replace human interactions with AI, specifically using AI for language conversation partners.

The takeaway is clear: founders should be examining virtually any marketplace and asking how LLMs might transform its structure and operations.

Timestamp: [03:44-04:15]

🧠 Overcoming Past Failures

Harj discusses the psychological barriers to entering spaces where previous startups have failed despite significant investment.

"There's also just a psychological element as a founder to when you enter into a space where there's been lots of smart teams and lots of capital that's flown into it."

He notes that recruiting startups like Triplebyte ($50M raised) and Hired (over $100M raised) collectively attracted hundreds of millions in funding but "overall as a category did not do particularly well."

"Going in, you face a lot of skepticism if you're going to go out and pitch investors for an idea, even when you have the like, 'Well, LLMs change everything.'"

He emphasizes that founders need to push through cynicism from investors who have lost money on similar ideas in the past.

Timestamp: [04:15-05:10]

🔄 The Instacart Parallel

Garry draws a parallel to Instacart's success despite the cautionary tale of Webvan's massive failure in online grocery delivery.

"Instacart was that story exactly. Webvan was sort of this rotting corpse of a startup just hanging in that doorway, and most people looked at that and said, 'Oh man, I don't want to walk near that.'"

What made the difference was a fundamental technology shift: the widespread adoption of smartphones enabled a mobile marketplace that wasn't possible during Webvan's era.

"Simultaneously, the iPhone and Android phones were everywhere, and you could have a mobile marketplace for the first time."

This historical example reinforces why the current moment with AI is so exciting - all the walls in the "idea maze" have shifted, creating new pathways to success in previously challenging spaces.

Timestamp: [05:10-05:45]

📱 Technology Unlocks

The conversation deepens the Instacart/Webvan comparison, highlighting how fundamental technology shifts enable previously failed business models to succeed.

"The big technology unlock for Instacart was the fact that everyone had a phone now. It enabled the Webvan model to actually work for the first time."

This pattern is repeating with LLMs transforming recruiting companies and numerous other business categories. The key insight is that breakthrough technologies don't just improve existing models - they fundamentally change what's possible.

Timestamp: [05:45-06:10]

🤖 Targeted AI Applications: Technical Screening

The discussion shifts to how focusing on specific pain points within larger markets can create valuable opportunities, using the example of Apriora, a YC-funded company.

"This is company called Apriora that Nico, the other GP here at YC, funded back in Winter '24, and their whole premise is to build AI agents that run the screening for technical interviews."

The speakers highlight that technical screening is a significant pain point for engineering teams, where:

"A lot of engineers spend a lot of time just doing a bunch of interviews, and the pass rate is so tiny. When I used to run engineering teams at Niantic, all that pre-screening was just so much work. The engineers hate doing it."

By focusing specifically on this one challenging aspect of the recruiting process, Apriora found success with large companies.

Timestamp: [06:10-06:58]

📈 Market Expansion Through Sophistication

Harj explains how AI allows companies like Apriora to expand their target market by enabling more sophisticated evaluations.

"There are plenty of technical screening products pre-Apriora, but you could only use them to do fairly simple evaluations to like, weed out people who weren't engineers at all effectively, or very, very junior."

The key advancement is that AI-powered products can now perform more nuanced evaluations that work for senior candidates as well.

"With LLMs, you can do more sophisticated evaluations to kind of get more nuanced levels of screening. And so suddenly now companies will be like, 'Oh, actually I could give this to not just like my international applicants or my college students. I'll just give it to like senior engineers who are applying.'"

This capability significantly expands the addressable market for technical screening tools.

Timestamp: [06:58-07:36]

🎓 The Holy Grail of EdTech: Personalization

The conversation transitions to education technology, where hyperpersonalization has been a persistent challenge.

"That aspect of doing hyperpersonalization is one of the holy grails where [it] has been difficult for edtech companies to crack, right? Because every student as they go through their learning journey - everyone is very unique and knows different things."

Harj expresses excitement about the longstanding dream of personalized education finally becoming possible.

"For as long as I can remember, the internet's been around, like one of the dreams of it was that everyone now would have access to personalized learning and knowledge, and we'd all just have these great intellectual tools to learn anything."

He notes that while the internet has made learning easier, we've never truly had personalized learning or "a personalized tutor in your pocket" - until now with AI.

Timestamp: [07:36-08:30]

📚 Innovative EdTech Examples

The speakers highlight two YC-funded companies successfully applying AI to education:

First, Revision Dojo helps students with exam preparation.

"[It's] sort of the version of flashcards, but not like the janky, just like boring going through content, but the version that actually students like and gets tailored for their journey."

They note that Revision Dojo has attracted "a lot of DAUs and a lot of power users," indicating strong product-market fit.

The second example is Adexia, which creates tools for teachers to grade assignments - addressing another pain point similar to the technical screening issue in recruiting.

"Adexia does tools for teachers to grade their assignments, which is another example of work that is not people's main job but is this other thing that they have to do."

"There's like a lot of studies that show that the biggest reason that teachers churn out of the workforce is that they hate grading assignments. It's just like no fun at all."

These examples demonstrate how AI can transform both learning and teaching by automating tedious aspects of education.

Timestamp: [08:30-09:33]

๐Ÿซ EdTech Adoption Challenge

The conversation concludes with an observation about the adoption patterns of these new educational technologies.

"One of the interesting trends for some of this stuff is that it's private schools who are actually much more nimble."

This raises a question about how to bring these innovations to the public education system, where they may be needed most.

"I'd be curious what policy changes we need to make to actually support this in public schools, because the public schools need it the most actually."

This highlights a broader challenge beyond the technology itself - how to ensure innovations reach the institutions and populations that could benefit most from them.

Timestamp: [09:33-09:45]

💎 Key Insights

  • AI, particularly large language models with massive context windows, is enabling previously failed startup ideas to finally become viable
  • Recruiting marketplaces that required complex human evaluation can now leverage AI for faster, more scalable assessment
  • Three or four-sided marketplaces can potentially be simplified to two-sided marketplaces with AI replacing human intermediaries
  • The pattern resembles how mobile technology enabled Instacart to succeed where Webvan failed - fundamental technology shifts create new possibilities
  • Focusing on specific pain points within larger markets (like technical screening) can be an effective entry strategy
  • AI enables more sophisticated evaluations that expand addressable markets beyond entry-level candidates
  • Personalized education - the "tutor in your pocket" dream - is finally becoming possible through AI
  • Solutions that eliminate tedious work that professionals hate (like grading for teachers) represent significant opportunities
  • Institutional adoption challenges remain, with private schools moving faster than public education systems

Timestamp: [00:43-09:45]

📚 References

Companies:

  • Triplebyte - Harj's recruiting marketplace startup that raised ~$50M
  • Hired - Competitor to Triplebyte that raised over $100M
  • Mercor - Current AI-powered recruiting startup mentioned as a hot company
  • Webvan - Failed online grocery delivery company from the dot-com era
  • Instacart - Successful grocery delivery company that succeeded where Webvan failed
  • Duolingo - Language learning app mentioned as potentially replacing human interaction with AI
  • Apriora - YC-funded company building AI agents for technical interview screening
  • Niantic - Company where one of the speakers previously ran engineering teams
  • Revision Dojo - YC-funded company creating personalized exam prep solutions
  • Adexia - YC-funded company building AI tools to help teachers grade assignments

People:

  • Harj - YC partner who previously ran Triplebyte
  • Garry - YC partner participating in the discussion
  • Nico - YC GP mentioned as having funded Apriora
  • Jared - YC partner mentioned in relation to Adexia

Technology Concepts:

  • LLMs (Large Language Models) - Core AI technology enabling new startup possibilities
  • Million token context window - Referenced as a capability in Gemini 2.5 Pro
  • Labeled data sets - Mentioned as previously required for machine learning models
  • AI codegen models - Specifically referenced for code evaluation capabilities
  • Hyperpersonalization - Described as the "holy grail" for edtech

Timestamp: [00:43-09:45]

🌐 Distribution vs Product Quality

The conversation shifts to a crucial question about AI-powered startups: do better products automatically get better distribution?

"It's clearly possible to build much better products with LLMs. If we take the learning apps for example, they can go far beyond anything you could do for personalized learning pre-LLMs. But it doesn't necessarily mean that you instantly get more distribution, especially if you're going after the consumer market."

This raises a fundamental business challenge - even with revolutionary AI capabilities, startups still face the age-old challenge of gaining users and market share.

Timestamp: [09:50-10:17]

💰 The Economics of AI Intelligence

The discussion turns to the changing economics of AI and how this impacts business models.

"Intelligence is much cheaper. It's quite a bit cheaper than it was last year, but it's still enough that you have to charge for it probably."

The speakers observe promising trends that could eventually make AI more accessible and affordable:

"Distillation from bigger models to smaller models is working. It seems clear that the mega giant models are teaching even the production model size of today to be smarter. The cost of intelligence is coming down quite significantly."

This cost trajectory suggests exciting possibilities for consumer applications that have previously been cost-prohibitive.

"Consumer AI, it finally might be here soon. The thing to track is: how smart is it such that any given user incrementally only costs, I don't know, pennies or like 10 or 15 cents? Then it becomes so cheap that you will just have intelligence for free."

Timestamp: [10:17-11:21]

💸 Return to the Freemium Model

As AI costs decrease, the speakers suggest we might see a revival of the freemium business model that was popular in the Web 2.0 era.

"Maybe it'll be a return to the premium model that we got used to during Web 2.0, this idea that you could basically give away your product and then for 5 or 10% of those users, there are things that they so want that you're going to sell them a $5 or $10 or $20 a month subscription."

OpenAI is cited as an example already implementing this approach, as is the education startup Study with 2DS, which is seeing significant success.

"On average the kids who use that actually get on grade level or can kind of go up even a couple grade levels. Those are real outcomes for students."

The speakers express optimism that as costs continue to decrease, this model could enable massive scale.

"Right now you still got to pay for it, but maybe not for a while. And that's actually a really big unlock. That's the moment where you could have 100 million or a billion people using it."

Timestamp: [11:21-12:15]

🗣️ Speak: Pre-LLM Success Story

The conversation highlights Speak as an example of an EdTech company that was ahead of the curve on personalization.

"Speak is this company that got started couple of years ago before LLMs were a thing at all. It was team of researchers that really believed that you could personalize language learning, which might have been a bit contrarian back then because Duolingo seemed to be the game in town that was winning."

The company focused intensely on personalizing language learning and found initial traction in Korea among English learners. When GPT-3 and 3.5 were released, they recognized the opportunity and doubled down.

"When GPT-3 and 3.5 were coming out, they saw that 'wow, this is going to be the moment.' They double down and they've been on this trajectory now with lots of MAUs that's really working out."

This example illustrates how companies with the right foundational thesis can leverage AI advancements to accelerate their growth.

Timestamp: [12:15-13:21]

💵 Value-Based Pricing in Consumer AI

The discussion shifts to how AI enables value-based pricing, drawing parallels between enterprise and consumer markets.

"We've seen a lot with the startups that are selling to enterprises or companies about how the budgets become so much bigger when companies stop thinking about you as software-as-a-service, but they start thinking about you as replacing their customer support team or their analytics team. They'll just pay way, way, way more."

This same principle can apply to consumer products, particularly in education.

"If you think about a personalized learning app, often EdTech companies struggle with who's actually the buyer and who's going to pay. If you go for younger children, you've got to get the parents to pay, but the parents aren't going to pay that much for an app that their kids don't retain or complete."

The key insight is that AI can transform the perceived value of the product:

"We know that parents will definitely pay for human tutors. That's actually probably quite a big market. If your app goes from being a self-study course that doesn't get any completion to actually being on par with the best human math tutor for your 12-year-old, parents will pay a lot more for that."

This value transformation means companies might not need massive scale to build substantial businesses.

"You don't necessarily need millions of parents using it, but even a hundred thousand parents using it paying you a significant amount means you now have a much bigger business than was possible before."

Timestamp: [13:21-14:39]

๐Ÿฐ Beyond AI: Building Defensible Moats

The conversation turns to the importance of building defensible business moats, even with AI-powered products.

"It's pretty clear a company like Speak or almost any of these other companies that could have durable revenue streams - what you need is brand, you need switching costs, sometimes it's integration with other technologies that are sort of surrounding that experience."

In educational contexts, this might mean integration with platforms like Clever for authentication. The speakers emphasize that simply incorporating AI isn't enough.

"Sam Altman has talked about this bunch. It's not enough to drop AI in it. You still have to actually build a business."

They note that OpenAI itself is supportive of startups building on their API:

"I don't think OpenAI is necessarily out to get all the startups. I actually think that on the API side, they very much hope that a lot of them do really, really well."

However, they also acknowledge OpenAI's growing interest in applications, evidenced by hiring the former Instacart CEO.

Timestamp: [14:39-15:35]

📱 Big Tech's AI Integration Challenges

The conversation addresses the curious phenomenon of large tech companies not fully leveraging AI capabilities in their products.

"Open AI is highly likely to be a trillion dollar company at some point and, you know, as powerful as a Google or an Apple or any of them. The interesting thing right now is they're still on the come-up, and if anything, the big tech platforms are actually still holding back a lot of the AI labs."

The most striking example mentioned is Apple's voice assistant:

"The most profound example of this is why is Siri still so dumb? It makes no sense."

This observation sets up the discussion for the next topic on platform neutrality.

Timestamp: [15:35-16:09]

⚖️ The Case for Platform Neutrality

The final section of the conversation advocates for platform neutrality as a necessary condition for innovation in AI.

"That points to something that we actually really need in tech today. We actually really need platform neutrality."

The speaker draws parallels to previous tech policy battles:

"In the same way 20, 30 years ago, there were all these fights about net neutrality, this idea that there should be one internet, that ISPs or big companies should not self-preference their own content or the content of their partners. That's what sort of unleashed this giant wave of really a free market on the internet."

Windows is cited as another example where government intervention created space for competition:

"If you open up Windows, you actually have to choose your browser, and then you also need to be able to choose which search engine you use. These are things that the government did get involved in and said, 'Hey, you cannot self-preference in this way.'"

The speaker argues this created the conditions for Google's rise:

"If you remember the moment where Internet Explorer had a majority of web users, that could have been a moment where Google couldn't have become what it became."

They conclude by suggesting similar principles should apply to voice assistants on smartphones:

"Why doesn't this exist for voice on phones? You should be able to pick. You shouldn't be forced to use Google Assistant. You shouldn't be forced to use Siri."

Timestamp: [16:09-17:37]

💎 Key Insights

  • Better AI products don't automatically gain distribution; startups still need effective go-to-market strategies
  • The cost of AI intelligence is decreasing rapidly, potentially enabling free or freemium consumer AI products
  • As AI reaches human-level quality in specific domains, it enables value-based pricing similar to human services
  • Consumer AI businesses can achieve significant revenue with smaller user bases by charging premium prices for truly valuable capabilities
  • Successful AI companies still need traditional defensible moats: brand, switching costs, and integrations
  • OpenAI aims to be platform-like for startups while also exploring direct applications
  • Major tech platforms are still underutilizing AI capabilities in their core products
  • Platform neutrality (like net neutrality) may be necessary to create fair market conditions for AI innovation
  • Previous government interventions in tech (browser choice, search engine choice) created conditions for competition
  • Voice assistant choice on smartphones represents a current area where platform neutrality is lacking

Timestamp: [09:50-17:37]

📚 References

Companies:

  • OpenAI - Mentioned as potentially becoming a trillion-dollar company and their approach to the API and application markets
  • Speak - EdTech company focused on personalized language learning that was founded before LLMs and thrived with their arrival
  • Duolingo - Referenced as the dominant language learning app that Speak was competing against
  • Study with 2DS - Education startup mentioned as having success with AI-powered learning
  • Instacart - Mentioned because OpenAI hired their CEO for applications
  • Clever - Educational technology platform mentioned for authentication integration
  • Google - Referenced in discussions about search engines, platform neutrality, and Google Assistant
  • Apple - Mentioned regarding Siri and platform control

People:

  • Sam Altman - Cited for his perspective that "it's not enough to drop AI in it, you still have to actually build a business"

Technology Concepts:

  • LLMs (Large Language Models) - Core technology enabling new product capabilities
  • Freemium/Premium Model - Business approach from Web 2.0 era potentially making a comeback with AI
  • Platform Neutrality - Concept advocated for AI similar to net neutrality for internet
  • Net Neutrality - Historical comparison to current platform neutrality needs
  • Model Distillation - Process of transferring knowledge from larger to smaller models
  • MAUs (Monthly Active Users) - Metric mentioned regarding Speak's success

Historical References:

  • Web 2.0 - Referenced regarding freemium business models
  • Internet Explorer dominance - Cited as a moment when government intervention enabled competition
  • Windows browser choice - Example of government-mandated platform neutrality

Timestamp: [09:50-17:37]

๐Ÿข Google vs OpenAI: The Usage Gap

The conversation opens with an interesting observation about the surprising disparity between Google's Gemini and OpenAI's ChatGPT usage despite Google's technical capabilities.

"I saw some numbers recently about how Gemini Pro models, like just their usage particularly from consumers, is just an insignificant fraction of ChatGPT's."

This gap appears especially puzzling given that YC's internal work has found Gemini 2.5 Pro to be highly competitive:

"We've been doing our own internal work building agents and actually being at the cutting edge of a lot of the AI tools, and we found that Gemini 2.5 Pro is like as good and in some cases a better model than GPT-4 for various tasks."

The speakers note that this technical parity hasn't translated to consumer awareness or adoption, despite Google already having billions of users through their existing products.

"That hasn't trickled down into public awareness yet, which is fascinating since Google already has all the users with their phones."

Timestamp: [17:42-18:17]

🎯 First-Mover Advantage in AI

The discussion continues by exploring the advantages of being first to market in the AI space.

"OpenAI is not a startup anymore, but relative to Google, it essentially is. So there is clearly some sort of intangible moat around being the first in a space and sort of staking your claim as like the best product for a specific use case."

The speakers suggest that being perceived as the leader can be more important than having objectively superior technology:

"At some point, maybe it doesn't even necessarily need to be like objectively the best. It just needs to be good enough."

This observation points to the challenges larger tech companies face in competing with more focused AI startups, even when the larger companies have technical parity or advantages.

Timestamp: [18:17-18:46]

🔄 Big Tech's AI Integration Struggles

The conversation turns to the challenges big tech companies face in effectively integrating AI into their products.

"That's the bet that I think a lot of the big tech companies are trying and failing at. Microsoft has a co-pilot built into Windows now that is still quite inferior to anything OpenAI puts out."

Despite having strong underlying models, the speakers express frustration with how these capabilities are implemented in consumer products:

"A lot of the Gemini integrations into Gmail or Google Drive are not... they're totally useless. It's like, is there someone at the wheel over there? I don't get it."

The speakers suggest that the issue may stem from organizational complexity rather than technical limitations.

Timestamp: [18:46-19:21]

🧩 Google's Organizational Challenges

The discussion delves into specific organizational issues at Google that may be hampering their AI efforts.

"There's actually two different products. There's Gemini where you can consume Gemini and Vertex Gemini, and I think they're like different orgs. I think it's suffering a little bit from being too big of a company and essentially shipping the org."

This confusing product strategy creates friction even for developers trying to use Google's AI:

"There's like these two APIs you can consume to use Gemini, and we're like, 'Why two?' One is from DeepMind and the other one is from GCP."

The speakers attribute this to Google's corporate culture, which differs significantly from startups:

"In a functioning startup... it goes up to some level and then ultimately the CEO or founders, and then they just say, 'Okay, well I see the points over here, I see the points over there, we're going this way.' But having lots of friends from Google, it doesn't seem like that's the culture there. There's a layer of VP and sort of management that is actually like, 'You guys just fight it out,' and so then you ship the org."

Timestamp: [19:21-20:18]

๐Ÿ‰ Google's Hidden Advantage: TPUs

Using a Game of Thrones analogy, the speakers highlight Google's potential advantage in the AI race through their custom hardware.

"The crazy thing about Googleโ€”they probably should have won... There's almost like, I don't know where all this Game of Thrones analogy could be used. They might be a little bit like Daenerys Targaryen because they secretly have dragons. The dragons are the TPUs."

This hardware advantage could be decisive in making AI more affordable and accessible:

"This is one of the reasons why I think they could be the one company that could get a lot of the cost of intelligence to be very low, and they also have the engineering to be able to do a cost-effective large context windows."

The speakers suggest that while other AI labs have been limited in deploying large context windows due to costs, Google's custom Tensor Processing Units (TPUs) give them a unique capability to overcome this barrier.

Timestamp: [20:24-21:02]

💻 Sam Altman's Hardware Focus

The conversation briefly touches on OpenAI's strategic focus on computing infrastructure.

"I think they done it so well, and they got TPUs, which I think is smart for Sam. If you saw his little announcement, he's still the CEO of compute, quote unquote. So I'm sure they're probably working on something around there too."

This observation suggests that OpenAI recognizes the strategic importance of hardware in the AI race and is likely developing their own capabilities to compete with Google's TPU advantage.

Timestamp: [21:02-21:26]

💥 The Innovator's Dilemma in Search

The discussion turns to the classic business challenge known as the innovator's dilemma, as it applies to Google's search business.

"Classic innovator's dilemma. It's like if Google replaced google.com with Gemini Pro, it would instantly presumably be like the number one chatbot LLM service in the world, but that it would give up 80% of its revenue."

The speakers suggest that making this kind of radical transition requires a certain type of leadership:

"You would probably need a pretty strong founder CEO to do that. It's the kind of thing I can imagine Zuck doing. You can't imagine a hired CEO who's going to do that."

They note that Mark Zuckerberg has demonstrated this type of bold decision-making by renaming Facebook to Meta, showing a willingness to make radical strategic pivots despite short-term costs.

Timestamp: [21:26-21:56]

📱 Meta's Intrusive AI Implementation

The conversation shifts to criticize Meta's approach to integrating AI into its messaging platforms.

"I started using the Meta AI in WhatsApp. It's very classic. It makes me feel like Zuck is clearly still in charge of product because I don't think anyone else would launch it that way."

The implementation is described as invasive and poorly designed:

"You now have an AI system that's just in all of your chats, and you can just @ it, and it will just start talking in a group chat, and it feels quite invasive actually."

The speakers note that the AI's capabilities don't justify this intrusive approach:

"It's not that smart, and then it can't do anything. Most people are surprised that it's in there. It feels like having someone from Facebook just in your chats."

They compare this to previous controversial product launches at Facebook:

"It reminds me of like the original newsfeed launch or something. It's just like the classic Meta style of like, this is sort of, I don't know, objectively optimal. Like, I'm sure people will love it."

Timestamp: [21:56-22:52]

🧠 Meta AI's Missing Context

The discussion continues with observations about Meta AI's puzzling limitations, particularly its inability to access Facebook's core social data.

"You have this Meta AI, and you ask it, 'Hey, who are my friends? I'm going to Barcelona next week, who are my friends in Barcelona?' And it's like, 'Sorry, as an AI, I actually don't have access to them.' It's like, what is the point of this?"

This critique highlights how Meta has failed to leverage its unique data advantage - the social graph - in creating an AI assistant that would have a clear differentiation from competitors.

Timestamp: [22:52-23:26]

📝 Google's AI Integration Failures

The conversation references an essay by YC partner Pete Koomen that analyzes Google's problematic approach to AI integration in Gmail.

"Our partner Pete Koomen wrote this really great essay where he talked about the Gemini integration with Gmail, and he really like broke down in great detail why Google built this integration all wrong and how they should have built it. It's almost like he was a PM at Google. Oh wait, he was a PM at Google."

One key insight from the essay concerns system prompts versus user prompts:

"One of the things he pointed out was that you have a system prompt and a user prompt, and if you are actually going to empower your users, you actually allow your user to change the system prompt."

The system prompt is described as what's "imposed upon the user," limiting customization and resulting in rigid, formal outputs:

"The example is an email saying that Pete's going to be sick. He's like, 'Sorry, I'm not going to be able to come in,' and he asks the agent to write this letter, and it's very formal. And of course it is because there's no way to change the tone."

The speakers praise the blog post itself for its interactive nature:

"It's actually one the best blog posts, and I think he had to vibe-code the blog post itself, because you can actually try the prompts yourself on that web page."

Timestamp: [23:26-24:53]

💡 Startup Opportunity: AI-Native Blogging

The discussion concludes with the speakers identifying a potential startup opportunity inspired by Pete Koomen's interactive blog post.

"It's like in this interactive template thing language, which made me think it's time to start an AI-first vibe coding blog platform."

They jokingly describe it as "AI-Posterous", and while one speaker mentions they don't have time to build it themselves, they offer it as a free idea:

"That's a free idea for anyone watching. We'll fund it."

This highlights how the YC partners are constantly thinking about new startup opportunities that emerge from observing gaps in the current technology landscape.

Timestamp: [24:53-25:08]

💎 Key Insights

  • Despite technical parity or superiority, Google's Gemini models have dramatically lower consumer usage than OpenAI's ChatGPT
  • First-mover advantage creates an "intangible moat" in AI, where being perceived as the best can outweigh actual technical superiority
  • Big tech companies are struggling with AI integration, creating products that underperform despite strong underlying technology
  • Google's organizational structure leads to competing AI products (Gemini vs. Vertex Gemini) from different internal teams
  • Google has a potentially decisive hardware advantage with TPUs that could dramatically reduce AI costs and enable larger context windows
  • The "innovator's dilemma" prevents Google from replacing search with AI chatbots despite technical capability, as it would sacrifice revenue
  • Meta's AI integration in messaging platforms feels invasive and poorly designed, reflecting leadership's "objectively optimal" product approach
  • Meta AI fails to leverage the company's key advantage (social graph data), limiting its usefulness and differentiation
  • Google's Gemini integration in Gmail lacks customization options by not allowing users to modify system prompts
  • Interactive AI-powered blogging platforms represent an emerging opportunity for startups

Timestamp: [17:42-25:08]

📚 References

Companies:

  • Google - Discussed extensively regarding their Gemini AI models, TPUs, organizational challenges, and search business
  • OpenAI - Referenced for ChatGPT's dominant usage compared to Gemini despite Google's technical capabilities
  • DeepMind - Mentioned as one of the Google entities developing an API for Gemini
  • GCP (Google Cloud Platform) - Noted as another Google division with its own separate Gemini API
  • Microsoft - Mentioned for Windows Copilot, described as inferior to OpenAI's offerings
  • Meta (Facebook) - Discussed regarding their AI integration in WhatsApp and the "blue app" (Facebook)
  • WhatsApp - Messaging platform where Meta AI was described as intrusive
  • YC (Y Combinator) - Referenced for their internal work evaluating AI models

People:

  • Zuck (Mark Zuckerberg) - Discussed as an example of a founder CEO willing to make radical strategic changes
  • Sam (Altman) - Mentioned regarding his role as "CEO of compute" at OpenAI
  • Pete Koomen - YC partner and former Google PM who wrote an essay critiquing Google's Gemini integration

Technology Concepts:

  • Gemini Pro & Gemini 2.5 Pro - Google's AI models discussed throughout
  • TPUs (Tensor Processing Units) - Google's custom AI hardware described as their "dragons"
  • Large context windows - Capability enabled by advanced hardware like TPUs
  • System prompt vs. user prompt - Distinction in AI interfaces, with system prompts typically controlled by companies
  • Vibe coding - Interactive coding used in Pete Koomen's blog post
  • AI-first blogging platform - Startup idea proposed at the end of the segment

Cultural References:

  • Game of Thrones - Used as an analogy where Google is compared to Daenerys Targaryen with TPUs as "dragons"
  • Innovator's dilemma - Business concept referenced regarding Google's challenge in transitioning from search to AI

Timestamp: [17:42-25:08]

💹 Tech-Enabled Services: The 2010s Wave

The conversation begins with a reflection on the "tech-enabled services" boom of the 2010s, a category of startups that aimed to combine software with operational services.

"Do you guys remember the tech-enabled services wave? For folks who didn't follow this in the 2010s, there was this huge boom in companies called tech-enabled services."

The speakers note that this approach emerged from an influential concept in the industry:

"It started with Balaji's blog post about full-stack startups. The concept was just that 'software eats the world' means software just kind of goes into the real world."

They explain the core premise behind these businesses:

"Instead of just having an app to deliver food, you should also have a kitchen that cooks the food and software to optimize the kitchen, and you just do everything. The full-stack startups in theory would be more valuable than just the software startups because they would do everything."

This approach promised to capture more value than pure software companies:

"Instead of just selling software to the restaurants and capturing 10%, you could just own the restaurant. You could capture 100%."

Timestamp: [25:15-25:24]

📊 Triplebyte: A Full-Stack Case Study

One of the speakers shares their firsthand experience running Triple Bite, which exemplified the tech-enabled services approach in the recruiting space.

"This is exactly what Triple Bite was. We were like, 'We're going to be a recruiting agency effectively. We're not selling software to recruiting agencies. We're actually just doing the whole thing.'"

The operational complexity went beyond just software:

"We also had recruiters on staff that were just there to help people negotiate salaries and match them to the right companies. It was very much in that wave of 'do everything.'"

The speaker notes that while Triplebyte achieved impressive growth by recruiting industry standards, it fell short compared to pure software startups:

"We actually got to like $20 million annual run rate, $24 million annual run rate within a few years. So like if you compared us to like a regular recruiting agency, it was like super fast. But if you compare it to like the top software startups, not that impressive."

Timestamp: [25:24-27:29]

📉 The Fatal Flaw: Gross Margins

The conversation identifies the central problem that doomed many tech-enabled services companies: poor gross margins.

"That wave of startups generally forgot that you need gross margins. Fast forward, basically the short version is: it didn't really work, and the full-stack startups actually were not more valuable than the SaaS companies."

The speakers explain how this issue created a destructive cycle:

"The margins didn't work out particularly well, and so then you need to keep raising more capital. So if you were like a fearsomely good fundraiser, you could sort of do it and kind of push yourself."

However, this approach wasn't sustainable in the long run:

"Even in those cases, I think most of those businesses, at some point it just caught up with them. At some point, actually, we have to figure out a way to scale the business and have good margins and make this profitable and not just rely on the next fundraising round."

The speakers reference other examples that encountered similar challenges:

"You could argue ZS was one of those for insurance and a bunch of different HR related things. They basically relied too much on hiring more salespeople and more customer success people instead of actually building software that then would create gross margin."

Timestamp: [27:29-28:13]

🔄 Learning From Failures: The Parker Conrad Approach

The conversation references how one entrepreneur learned from the mistakes of tech-enabled services.

"Parker Conrad said, 'Well, I'm not going to do that again, and I'm also going to force all the engineers to do the customer support so that they go on to build software that doesn't require so much support and thus there is gross margin.'"

This experience led to a broader industry realization:

"That was a whole lesson that the whole tech community learned collectively through the 2010s. If we learned one thing, it's gross margin matters a lot. You cannot and should not sell $20 bills for $10, because you're going to lose everything."

Timestamp: [28:13-28:43]

🧠 Beyond Finances: The Focus Problem

One speaker explains that the problems with low-margin businesses extend beyond just financial considerations to how they affect a company's focus and operations.

"I think a sort of non-financial reason why the gross margins matter is low gross margin businesses usually mean you have some ops component, and then you have to run the ops component."

Drawing from personal experience at Triplebyte:

"There was a lot of brain power spent on 'How do we manage this team of contracted engineers? This team of humans looking after the essentially human recruiting team?' Lots of pieces of the business."

This operational complexity distracted from the core challenge:

"The existential issue we had is: How do we get to millions of engineers across the world all on our platform and all locked in? How do you just get lots of distribution?"

The speaker contrasts this with the advantages of high-margin businesses:

"Something that's nice about a high gross margin business is it's just a simpler product or a simpler company to run. You can actually just spend all of your time focused on: How do I make the product better? How do I get more users and get more distribution? So that you can keep that exponential growth for a decade."

This focus issue explains why many full-stack startups hit growth ceilings:

"I think a lot of full-stack startups partly plateaued out because they're complex businesses to run."

Timestamp: [28:43-29:43]

๐Ÿข WeWork: The Ultimate Cautionary Tale

The conversation briefly touches on WeWork as an extreme example of the full-stack approach with poor margins.

"A very famous example of that was WeWork, which very much took it to the limit. The margins were not there. It was not - it didn't have the tech margins."

The speakers note WeWork's creative but ultimately unsuccessful attempt to position itself as a technology company:

"It had 'community-adjusted EBITDA,' which was very creative."

This reference to WeWork's controversial financial metric underscores how some companies tried to obscure their fundamental margin problems through financial engineering.

Timestamp: [29:43-30:01]

🔄 Full-Stack 2.0: The AI Renaissance

The conversation takes an optimistic turn as the speakers suggest that AI is creating a second chance for the full-stack model.

"What I've been excited about recently is I think you can make a bullish case that now is the time to build these full-stack companies."

The key difference is how AI changes the operational economics:

"The Triple Bite 2.0s won't have to hire this huge ops team and have bad gross margins. They'll just have agents that do all the work. So now, actually, full-stack companies can look like software companies under the hood for the first time."

Timestamp: [30:01-30:25]

⚖️ Full-Stack Law Firms: Then and Now

The discussion uses legal services as a case study for how AI is transforming what's possible in full-stack businesses.

"Atrium, started by Justin Kan, full-stack law firm, didn't work out for a lot of these same reasons. But now I heard him say that, 'Look, we went in trying to use AI to automate large parts of it, and the AI was not good enough at that moment, but it's good enough now.'"

The speakers highlight a current YC company in this space:

"Within YC, we have Legora, which is like one of the fastest growing companies we've ever funded. It's not building a law firm, but they're essentially building AI tools for lawyers."

They predict how this approach will eventually evolve:

"But you can see where that's going to extend out to. Eventually, their agents are just going to do all of the legal work, and they'll be the biggest law firm on the planet. And I think that's a kind of full-stack startup that just wasn't possible pre-LLM."

Timestamp: [30:25-31:09]

🤖 Knowledge Work Transformation

The speakers reflect on how the current wave of AI is transforming knowledge work in a way that enables new full-stack approaches.

"This started right when Uber and Lyft and Instacart and all of these companies were happening. And the thing is now, I mean, you can actually have LLMs do a lot of the knowledge work."

They note that AI capabilities continue to advance:

"Increasingly, it could actually have memory. This is one of the RFS's. It's literally you can have virtual assistants, but they become less and less virtual if they can also hire real people to do things for you."

Timestamp: [31:09-31:35]

🌍 Virtual Assistant Marketplaces: The Previous Wave

The discussion acknowledges that virtual assistant services have been attempted many times before, but with limited success.

"Virtual assistant marketplaces was definitely like a whole category of companies for like 15 years, including Exec, where you build like a marketplace of people in the Philippines and other countries, and then you exposed a sort of like Airbnb UI."

However, these previous attempts fell short:

"I don't think any of them ever really got to become amazing businesses though."

This observation sets up the contrast with what might be possible now with AI-powered assistants that have greater capabilities.

Timestamp: [31:35-31:48]

🛠️ AI Infrastructure Opportunities

The conversation concludes by referencing Pete's blog post about system prompts, which leads to a broader discussion about the immaturity of AI tooling and infrastructure.

"Going back to Pete's post, I think the other thing that's interesting about the points he made around the system prompt and user prompt, and maybe we want to expose the system prompt to users a little bit more... it's an example of just how we're still so early in just using AIs and building agents."

The speakers highlight that this creates significant startup opportunities:

"There's all this tooling and infrastructure still to build. You have to do evals, you have to run the models, a whole bunch of stuff to build still. And so there's clearly still a bunch of startups yet to be built in just the infrastructure space around deploying AI and using agents."

This insight suggests that beyond full-stack applications, there's a parallel opportunity in building the tools that will enable those applications.
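As a taste of how unbuilt this tooling still is, even a bare-bones eval harness is something most teams write themselves today. The sketch below is a toy illustration: the test cases and the pass criterion are made up, and real evals often use a second model as the grader rather than a substring check.

```python
# Toy eval harness: run fixed test cases through any model-backed function
# and report a pass rate. Cases and the substring check are illustrative.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class EvalCase:
    prompt: str
    expected_substring: str  # crude criterion; production evals are richer

def run_evals(agent: Callable[[str], str], cases: List[EvalCase]) -> float:
    passed = 0
    for case in cases:
        output = agent(case.prompt)
        if case.expected_substring.lower() in output.lower():
            passed += 1
        else:
            print(f"FAIL: {case.prompt!r} -> {output[:80]!r}")
    return passed / len(cases)

cases = [
    EvalCase("What is the capital of France?", "Paris"),
    EvalCase("Summarize in one word: the meeting moved to Tuesday.", "Tuesday"),
]
# pass_rate = run_evals(my_agent, cases)  # my_agent is whatever calls your model
```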

Timestamp: [31:48-32:25]

💎 Key Insights

  • The tech-enabled services wave of the 2010s failed primarily due to poor gross margins that made scaling unsustainable
  • Full-stack startups required continuous fundraising to offset operational costs, creating dependency on capital markets
  • Low-margin businesses suffer not just financially but also from divided focus between operations and core product/distribution
  • Operational complexity in full-stack companies created a ceiling on growth compared to pure software businesses
  • AI now enables "Full-Stack 2.0" where agents can replace human operations while maintaining software-like margins
  • Previous attempts at full-stack models (like Atrium in legal) were limited by the AI technology available at the time
  • Current AI capabilities make knowledge work automation possible at a scale and quality that wasn't previously achievable
  • Virtual assistant marketplaces have been attempted for 15+ years but never achieved significant success
  • The AI infrastructure space remains underdeveloped, with substantial opportunities for tools around evaluation, deployment, and agent building
  • Companies that enable users to customize AI behavior (like modifying system prompts) may have advantages in usability and adoption

Timestamp: [25:15-32:25]

📚 References

Companies:

  • Triplebyte - Tech-enabled recruiting company referenced as an example of the full-stack approach
  • Atrium - Justin Kan's full-stack law firm that didn't succeed with earlier AI technology
  • Legora - Current YC-funded company building AI tools for lawyers with rapid growth
  • WeWork - Referenced as an extreme example of a full-stack company with poor margins
  • Zenefits (ZS) - Mentioned as relying too heavily on sales and customer success rather than software
  • Uber - Mentioned as part of the first wave of tech-enabled services companies
  • Lyft - Mentioned alongside Uber as part of the first wave
  • Instacart - Mentioned alongside Uber and Lyft as part of the first wave
  • Exec - Previous virtual assistant marketplace company

People:

  • Balaji - Referenced for his influential blog post about full-stack startups
  • Justin Kan - Founder of Atrium, the full-stack law firm
  • Parker Conrad - Cited for his approach of having engineers do customer support to improve software quality
  • Pete - YC partner whose blog post about system prompts was referenced

Business Concepts:

  • Tech-enabled services - Business model combining software with operational services that was popular in the 2010s
  • Full-stack startups - Companies that own the entire value chain rather than just providing software
  • Gross margins - Identified as the critical factor in business sustainability
  • Community-adjusted EBITDA - Creative financial metric used by WeWork
  • SaaS (Software as a Service) - Contrasted with tech-enabled services as having better margins
  • Annual run rate - Metric used to describe Triplebyte's growth ($20-24 million)
  • RFS (Request for Startups) - YC's way of signaling promising startup opportunities

Technology Concepts:

  • LLMs (Large Language Models) - AI technology enabling the new wave of full-stack companies
  • Virtual assistants - AI-powered helpers that can potentially replace human operations
  • System prompt vs. user prompt - Distinction in AI interfaces discussed in Pete's blog post
  • Evals - Evaluations needed for AI systems to ensure quality and performance

Timestamp: [25:15-32:25]

🔍 The ML Ops Paradox

The conversation begins with a reflection on how perceptions of machine learning operations (ML ops) startups have dramatically changed over the years.

"Something that struck me about when I first came back to YC in 2020 is I remember a class of idea we weren't interested in funding was anything in the world of like ML, machine learning operations, or ML tools. I remember reading some applications and people like, 'Ah, like another ML ops team. These sort of never go anywhere.'"

This initial skepticism has been completely reversed by recent developments:

"Clearly if you were working on ML ops in 2020 and you just stuck it out for a few years, you're in the right spot."

One of the speakers shares their frustration from that earlier period:

"I remember I got so frustrated after years and years of funding these ML ops companies with really smart, really optimistic founders that just didn't go anywhere that I ran a query to count. I remember finding that, I think this was around 2019, we had more applications in 2019 for companies building ML tooling than we had applications for the customers of those companies."

This revealed a fundamental market problem:

"That was the core problem - these people were building ML tooling but there was no one to sell it to, because the ML didn't actually work. So there just wasn't anything useful that you could build with all this ML tooling. People didn't want it yet."

Timestamp: [32:31-33:45]

โณ Timing Is Everything

The speakers reflect on how the ML ops companies were directionally correct but simply too early for the market.

"I mean, directionally it was absolutely correct. Like from a sci-fi level on a 10-year basis, it was beyond correct. Yes, it was just wrong for that moment."

This observation highlights the critical importance of timing in startup success, where being too early can be just as problematic as being too late.

Timestamp: [33:45-33:56]

💪 Persistence Pays Off: Replicate's Story

The conversation turns to success stories of companies that persisted through the challenging early days of ML infrastructure.

"Part of the lesson is sometimes it will take a bit of time for technology to catch up. And this company called Replicate that you work with stuck it out. It was from that era."

The speaker details Replicate's difficult journey:

"Replicate was from Winter '20, and they started the company right before COVID. And during the pandemic, it was going so poorly that they actually stopped working on it for several months and just didn't work on it, 'cause it wasn't clear that the thing had a future at all."

Their persistence eventually paid off spectacularly:

"They picked it back up and just started working on it quietly, but it basically was just like they were just building this thing in obscurity for two years until the image diffusion models came out, and then it just exploded like overnight."

Timestamp: [33:56-34:37]

๐Ÿ”„ Ollama: Another Persistence Success

Another example is shared of an ML infrastructure company that persisted through challenging early days until the market caught up.

"The Ollama folks were also from that pandemic era, and similar story to Replicate. They were kind of trying to do different things around here too, and they were trying to work it out to make open source models deploy a lot better. They were also quietly working on it for a while. Things weren't really taking off."

The breakthrough came with a specific technological advance that created their market:

"Then suddenly, I think the moment for them was when Llama got released. That was like the easiest way for any developer to run open source models locally, and it took off because suddenly the interest to run models locally just took off when things started to work."

The speaker contrasts this with the earlier, less capable open source models:

"There were all these other open source models that were in Hugging Face, and especially the ones from like BERT models. Those were like the more used deep learning models. They were just okay, but not many people were using them because they weren't quite working."

Timestamp: [34:37-35:28]

๐Ÿงญ The Follow Your Curiosity Approach

The discussion turns to what lessons can be drawn from these success stories, highlighting the value of intrinsic motivation over market calculation.

"Some of it is like, be on top of the oil well before the oil starts shooting out of the ground, but is that actionable?"

One speaker suggests that the common thread is founders following their genuine interests:

"It's kind of the classic startup advice of follow your own curiosity. Most of these teams, or almost all these teams, were working on it because they were just interested in ML. They wanted to deploy models, they were frustrated with the tooling, probably weren't necessarily commercially minded and trying to pick the best startup idea they could possibly work on."

This approach sometimes leads to being perfectly positioned when markets suddenly emerge:

"Sometimes you get lucky."

Timestamp: [35:28-36:04]

๐Ÿ”„ Pivots and Persistence: More Success Stories

The conversation continues with additional examples of companies that navigated the early ML landscape through pivots or persistence.

"We were just sitting with Varun from Windsor, and he pivoted out of ML ops into code gen."

Another compelling example is shared in more detail:

"Deepgram is another one. Deepgram was one of the first companies I worked with back in 2016, and it was these two physics PhDs. They had done string theory, so they weren't even computer scientists, and they got interested in deep learning because they saw parallels with string theory."

Their motivation was purely intellectual:

"They found the mathematics to be elegant and interesting. That's really the origin. So they started working on deep learning before anybody really, and they built this speech-to-text stuff."

Like the other examples, their technology wasn't immediately successful:

"It just didn't really work that well for like a long time, and so nobody really paid much attention to this company. Wasn't famous."

The founders persisted despite the lack of recognition:

"The founders, to their credit, just kept working on it. And then when the voice agents took off, they all needed speech-to-text and text-to-speech, and most of them are actually using Deepgram under the hood. So they've just exploded in the last couple of years."

Timestamp: [36:04-37:04]

๐Ÿ”ฎ The Ilya Sutskever Parallel

The speakers draw a parallel between these persistence stories and one of the key figures behind the current AI revolution.

"I mean, I guess essentially the whole AI revolution is built on Ilya Sutskever following his own curiosity for like a long period of time. We need more of that actually."

This observation highlights how some of the most transformative technological developments come from deep, sustained intellectual curiosity rather than market-driven development.

Timestamp: [37:04-37:16]

๐Ÿ“š Outdated Startup Advice

The conversation pivots to how traditional startup advice may no longer be applicable in the AI era.

"We were at colleges - Tiรชn and I went on this college tour - and we spent several weeks speaking to college students. And I realized that there's this piece of startup advice that became canon that I think is outdated."

The speaker explains the historical context for this advice:

"Back in the pre-AI era, it was really hard to come up with good new startup ideas because the idea space had been picked over for like 20 years. And so a lot of the startup advice that people would hear would be like, 'You really need to sell before you build. You have to do detailed customer discovery and make sure that you've found a real new customer need.'"

This philosophy became formalized in influential startup methodologies:

"The Lean Startup, exactly. Fail fast, all this stuff. And that is still the advice that college students, I think, are receiving for the most part because it became so dominant."

Timestamp: [37:16-38:06]

๐Ÿ”„ New Rules for the AI Era

The speaker proposes that the AI revolution requires a fundamentally different approach to startups.

"I would argue that in this new AI era, the right mental model is closer to what Harj said, which is just: use interesting technology, follow your own curiosity, figure out what's possible. And if you're doing that, if you're living at the edge of the future like PG said, and you're exploring the latest technology, there's so many great startup ideas. You're very likely to just bump into one."

Another speaker elaborates on why this approach is particularly effective now:

"The reason why it could work extra well today is that you apply the right prompts and the right data set and a little bit of ingenuity, the right evals, a little bit of taste, and you can get just magical output. And then that's still a secret, I think."

Timestamp: [38:06-38:49]

๐Ÿ’ผ The Corporate AI Gap

The conversation turns to how even successful tech companies are slow to adopt and transform with AI.

"You can tell it's still a secret because you could look at thereโ€”like hundreds of unicorns out there that still exist and that are doing great, you know, growing year on year, have plenty of cash, all of that. But the number of them that are actually doing any sort of transformation internally, it's not that many."

The speaker expresses surprise at this slow adoption:

"A shocking few number of companies that are, you know, 100 to 1000 person startups that are going to be great businesses, but that class of startup, by and large, they are not entirely aware. Like, there isn't a skunkworks project in those things yet."

Where AI is being used, it's often in limited, personal capacities rather than strategically:

"The extent of it is, maybe the CEO is playing around with it. Maybe some of the engineers who are really forward-thinking are doing things in their spare time with it. Maybe they're using Windsor or Cursor for the first time."

The disconnect between technological capability and adoption is striking:

"You look down and you're like, 'What year is it?' It's a little bit like, 'Hey, you know, get on this.'"

Timestamp: [38:49-39:43]

๐Ÿ˜ฒ The Surprised AI Researcher

The conversation concludes with an anecdote about AI researchers' surprise at the slow adoption of their revolutionary technology.

"Bob McGrew came on our channel, and he was just shocked. He was one of the guys as Chief Research Officer building what became GPT-4 and all these things. And then he releases it, and who's using it? He expected this crazy outpouring of like, 'Intelligence is too cheap to meter! This is amazing!'"

The reality was quite different:

"And it's like, actually, people are mainly justโ€”we're just still on our quarterly roadmap unchanged from, you know, even a year ago."

This disconnect between technological capacity and actual adoption highlights the opportunity space for founders who can bridge this gap.

Timestamp: [39:43-40:15]

โœจ Closing Thoughts: The Golden Era

The host wraps up with an optimistic summary of the discussion.

"My main takeaway from this has been there's never a better time to build. So many ideas are possible today that weren't even possible a year ago. And the best way to find them is to just follow your own curiosity and keep building."

This closing statement encapsulates the core message of the conversation: that we're in an unprecedented period of opportunity for entrepreneurs willing to explore the frontiers of AI technology.

Timestamp: [40:15-40:34]

๐Ÿ’Ž Key Insights

  • ML ops companies from 2019-2020 faced a market timing problem - they were building tools for ML applications that weren't yet viable
  • Founders who persisted through the challenging early days of ML infrastructure (like Replicate and Ollama) were eventually rewarded when the market caught up
  • Many successful AI infrastructure companies were founded by people following intellectual curiosity rather than market calculations
  • The release of specific models (like image diffusion models for Replicate or Llama for Ollama) created tipping points that suddenly validated previously struggling businesses
  • Traditional startup advice like "lean startup" and "fail fast" may be less relevant in the AI era than an exploration-based approach
  • Even among 100-1,000-person tech companies, AI adoption remains surprisingly low, creating opportunities for new startups
  • AI researchers and developers are often surprised by how slowly their breakthrough technologies are being adopted in the mainstream
  • The current environment offers unprecedented opportunities for founders to create startups around capabilities that didn't exist even a year ago
  • Following curiosity and technological exploration is potentially more effective than market-driven customer discovery in the AI era
  • The gap between what's technically possible with AI and what most businesses are actually doing represents a massive opportunity space

Timestamp: [32:31-40:34]

๐Ÿ“š References

Companies:

  • Replicate - ML infrastructure company from Winter '20 that persisted through difficult times until image diffusion models created explosive growth
  • Ollama - Company that built tools for deploying open source models locally, which took off after Llama's release
  • Windsurf - Mentioned as having pivoted from ML ops to code generation
  • Deepgram - Speech-to-text company founded by physics PhDs that worked on the technology for years before voice agents created demand
  • Cursor - AI coding tool mentioned as being newly adopted by some forward-thinking engineers

People:

  • Varun - Founder of Windsurf who pivoted from ML ops to code generation
  • Tiên - Mentioned as participating in a college tour with one of the speakers
  • PG (Paul Graham) - Referenced for his advice about "living at the edge of the future"
  • Bob McGrew - Former Chief Research Officer at OpenAI who was surprised by the slow adoption of AI technology
  • Ilya Sutskever - Co-founder of OpenAI mentioned as having followed his curiosity for a long time, leading to breakthroughs

Organizations:

  • YC (Y Combinator) - Startup accelerator where the speakers work and observe AI startup trends
  • Hugging Face - Repository for open source models mentioned as hosting earlier, less capable models like BERT

Technology Concepts:

  • ML ops (Machine Learning Operations) - Tools and infrastructure for deploying and managing machine learning models
  • Image diffusion models - AI models for generating images that created a breakthrough moment for Replicate
  • BERT models - Earlier generation of language models that weren't widely adopted due to limited capabilities
  • Llama - Open source large language model whose release created a market for Ollama
  • Speech-to-text/Text-to-speech - Technology developed by Deepgram that became essential for voice agents
  • Evals - Evaluation methodologies for assessing AI model performance

Business Concepts:

  • The Lean Startup - Business methodology focused on testing ideas before building products, described as potentially outdated for the AI era
  • Fail fast - Startup philosophy of quickly abandoning underperforming ideas, contrasted with the persistence needed in AI infrastructure
  • Customer discovery - Process of validating market needs before building products, described as less relevant in the current AI landscape
  • Skunkworks project - Internal innovation initiative, noted as absent in many established tech companies

Timestamp: [32:31-40:34]