Sam Altman on AGI, GPT-5, and what's next

On the first episode of the OpenAI Podcast, Sam Altman joins host Andrew Mayne to talk about the future of AI: from GPT-5 and AGI to Project Stargate, new research workflows, and AI-powered parenting.

June 18, 2025 · 40:23


🎙️ What's the OpenAI Podcast Really About?

Introduction & Mission

The OpenAI Podcast launches with a clear mission: to pull back the curtain on one of the world's most influential AI companies. Host Andrew Mayne brings a unique insider perspective, having worked both as an engineer on OpenAI's applied team and as its science communicator before transitioning to help companies integrate AI.

Key Focus Areas:

  1. Behind-the-Scenes Insights - Direct access to OpenAI team members and leadership
  2. Future Glimpses - Understanding where AI technology is heading
  3. Practical Applications - Real-world implementation stories and challenges

What Makes This Different:

  • Insider Access: Host's former OpenAI engineer background provides unique credibility
  • Technical Depth: Balance between accessibility and technical sophistication
  • Forward-Looking: Focus on emerging capabilities and future implications

Episode Format:

  • Direct conversations with OpenAI personnel
  • Exploration of current projects and developments
  • Discussion of broader AI implications and timeline predictions

Timestamp: [0:00-0:38]

👶 How Is ChatGPT Revolutionizing New Parenthood?

AI-Powered Parenting Support

Sam Altman shares his personal experience as a new parent using ChatGPT, revealing how AI has become an indispensable parenting resource that's changing how new families navigate early childcare challenges.

Personal Experience Highlights:

  1. Constant Early Support - Used "constantly" during the first few weeks for basic baby care questions
  2. Developmental Guidance - Now primarily asks about developmental stages and milestone concerns
  3. Confidence Building - Helps distinguish between normal variations and genuine concerns

The "Is This Normal?" Factor:

  • Quick answers to urgent parenting questions at any hour
  • Reduces anxiety around common baby behaviors and development
  • Provides immediate reassurance when a pediatrician isn't available

Future Considerations:

Sam Altman: "I spend a lot of time thinking about how my kid will use AI in the future."

Sam Altman: "I don't know how I would have done that!" (referring to the early weeks without ChatGPT)

Broader Community Trend: Andrew notes that many OpenAI employees, both current and former, are having children and remain optimistic about raising families in an AI-integrated world.

Timestamp: [0:50-1:52]

🚀 What Will Growing Up With AI Actually Look Like?

The First AI-Native Generation

Sam Altman paints a compelling picture of how children born today will experience a fundamentally different relationship with artificial intelligence, growing up in a world where AI capabilities are simply part of the natural environment.

Key Generational Shifts:

  1. Innate AI Fluency - Children will use AI "incredibly naturally" without the learning curve adults experience
  2. Expanded Capabilities - Will grow up "vastly more capable" than previous generations
  3. Historical Perspective - Will view our current era as "prehistoric" in terms of AI capabilities

The Broken iPad Analogy:

Sam Altman: "There's this video that always has stuck with me of a baby or like a little toddler with one of those old glossy magazines going like this on the screen... because it thinks it's an iPad."

This demonstrates how quickly children adapt to new interfaces and expect digital responsiveness.

Reframing Intelligence Comparisons:

Sam Altman: "My kids will never be smarter than AI. But also they will grow up vastly more capable than we grew up, able to do things that we just cannot imagine."

Sam Altman: "I don't think my kids will ever be bothered by the fact that they're not smarter than AI."

Current Evidence - Voice Mode Adoption:

  • Children naturally gravitate toward ChatGPT's voice mode
  • Example: Child spent an hour discussing Thomas the Tank Engine with AI
  • Shows immediate comfort with AI as conversational partner

Timestamp: [1:52-3:10]

⚠️ What Are the Real Risks of AI-Native Childhoods?

Acknowledging Potential Challenges

While optimistic about AI's potential, Sam Altman candidly discusses the darker possibilities that come with children growing up immersed in AI systems, particularly around relationship formation and social development.

Primary Concerns Identified:

  1. Parasocial Relationships - Risk of children forming "somewhat problematic or maybe very problematic" emotional bonds with AI
  2. Social Development Impact - Potential effects on human-to-human relationship skills
  3. Dependency Issues - Over-reliance on AI for emotional and intellectual support

The Guardrails Challenge:

  • Society will need to develop new protective frameworks
  • Current social structures weren't designed for AI-human relationship dynamics
  • Need for proactive rather than reactive policy development

Historical Adaptation Patterns:

Sam Altman: "I was one of those kids everyone worried was just gonna Google everything when it came out and stop learning. You know, it turns out, like, relatively quickly, kids in schools adapt."

Educational Integration Insights:

  • Effective: ChatGPT used alongside good teachers and curriculum
  • Problematic: Using AI solely as a "homework crutch" leads to surface-level engagement
  • Optimistic Outlook: Society typically adapts well to new technologies

Balanced Perspective:

Sam Altman: "I suspect this is not all gonna be good. There will be problems... but the upsides will be tremendous. And society in general is good at figuring out how to mitigate the downsides."

Timestamp: [3:10-4:05]

🤖 How Is AGI Definition Evolving Beyond Recognition?

The Moving Goalpost Phenomenon

Sam Altman reveals how the definition of Artificial General Intelligence has fundamentally shifted, with capabilities that would have qualified as AGI five years ago now considered routine, forcing a complete reconceptualization of what true AI advancement means.

The Definition Evolution:

  1. Past Benchmarks Surpassed - Cognitive capabilities from 5 years ago are now "well surpassed"
  2. Continuous Progression - More people will think AGI is achieved each year
  3. Expanding Ambitions - Definitions become more demanding as capabilities improve

Current Reality Check:

Sam Altman: "These models are smart now. And they'll keep getting smarter. They'll keep improving."

Sam Altman: "We have systems now that are really increasing people's productivity, that are able to do valuable economic work."

The Superintelligence Threshold:

Rather than focusing on AGI, Altman proposes a clearer benchmark:

Sam Altman: "If we had a system that was capable of either doing autonomous discovery of new science or greatly increasing the capability of people using the tool to discover new science, that would feel like kind of almost definitionally superintelligence to me and be a wonderful thing for the world."

Scientific Progress as the Ultimate Metric:

  • Core Belief: Scientific advancement is "the high order bit of people's lives getting better"
  • Current Limitation: Scientific progress speed constrains human improvement
  • AI's Role: Dramatically accelerating discovery across all fields

Early Indicators:

  • Coders becoming "much more productive" with AI assistance
  • Researchers working faster with AI tools
  • Not yet autonomous discovery, but clear productivity gains

Timestamp: [4:10-7:03]

💎 Key Insights

Essential Insights:

  1. AI Parenting Revolution - ChatGPT has become an indispensable tool for new parents, providing 24/7 support for childcare questions and developmental concerns
  2. Generational AI Fluency - Children born today will grow up with innate AI literacy, viewing current capabilities as primitive and using AI more naturally than any previous generation
  3. AGI Goalpost Movement - Traditional AGI definitions are obsolete; what seemed impossible five years ago is now routine, requiring new benchmarks focused on autonomous scientific discovery

Actionable Insights:

  • For New Parents: Leverage ChatGPT for immediate answers to common childcare questions, but maintain balance with professional medical advice
  • For Educators: Integrate AI tools thoughtfully alongside quality teaching rather than allowing them to become homework shortcuts
  • For Organizations: Prepare for a generation that will use AI as naturally as current generations use smartphones, requiring new interaction paradigms

Timestamp: [0:00-7:03]

📚 References

People Mentioned:

  • Sam Altman - CEO and co-founder of OpenAI, sharing personal parenting experiences and AGI perspectives
  • Andrew Mayne - Former OpenAI engineer and science communicator, now podcast host

Companies & Products:

  • OpenAI - AI research company developing ChatGPT and other AI systems
  • ChatGPT - AI chatbot being used extensively for parenting support and various applications

Concepts & Frameworks:

  • Artificial General Intelligence (AGI) - Evolving definition of human-level AI capabilities across all domains
  • Superintelligence - Proposed benchmark focusing on autonomous scientific discovery capabilities
  • Parasocial Relationships - Potential problematic emotional bonds between humans and AI systems

Timestamp: [0:00-7:03]

🔬 How Is o3 Revolutionizing Scientific Discovery?

Breakthrough Progress in AI Research Capabilities

Sam Altman reveals the remarkable acceleration from o1 to o3, showcasing how rapid iteration cycles are pushing AI systems toward genuine scientific breakthrough capabilities that consistently impress researchers across disciplines.

The o1 to o3 Evolution:

  1. Rapid Innovation Cycles - Major breakthroughs occurring "every couple of weeks"
  2. Team Momentum - Continuous stream of breakthrough ideas from the research team
  3. Accelerated Discovery - When big insights emerge, progress can happen "surprisingly fast"

Scientific Community Response:

Sam Altman: "We hear this with o3 all the time from scientists as well."

The consistent positive feedback from scientists suggests o3 is approaching practical research utility, though not yet autonomous discovery.

Current Limitations and Potential:

Sam Altman: "I wouldn't say we figured it out. I wouldn't say we know the algorithm where we're just like, alright, we can point this thing and it'll go do science on its own. But we're getting good guesses, and the rate of progress is continuing to just be, like, super impressive."

The Insight-Driven Acceleration Pattern:

Sam Altman: "It was a reminder that sometimes when you, like, discover a big new insight, things can go surprisingly fast, and I'm sure we'll see that many more times."

This suggests we're in a phase where fundamental breakthroughs can rapidly compound, leading to exponential rather than linear progress.

Timestamp: [7:13-7:51]

🖱️ When Did Operator Become an AGI Moment for Users?

The Computer-Using AI That Feels Like Magic

Operator with o3 represents a pivotal moment where many users experienced their first genuine "AGI feeling" - watching an AI system navigate computers with human-like competence, despite not being perfect.

The AGI Recognition Pattern:

  1. User Testimonials - Multiple people citing Operator + o3 as their personal AGI moment
  2. Computer Interaction - Something uniquely compelling about watching AI use computers
  3. Capability Leap - o3 represents a significant improvement over previous versions

The Brittleness Problem Solved:

Andrew Mayne: "The thing that we ran into before was brittleness: you have people who promise agentic systems can do all these things, but the moment it gets to a problem it can't solve, it falls apart."

Operator with o3 shows marked improvement in handling unexpected situations and edge cases.

The AGI Perception Gap:

Sam Altman: "A lot of people have told me that their personal moment was Operator with o3, and there's something about watching an AI use a computer pretty well. Not perfectly, but o3 was a big step forward that feels very AGI-like. It didn't really have that effect on me to the same degree, although it's quite impressive."

This reveals an interesting disconnect between creator and user perspectives on AGI milestones.

Practical Magic Example:

Andrew shares a research workflow transformation: asking Operator to collect Marshall McLuhan images resulted in "a whole folder full of these things" that "would have taken me forever to do."

Timestamp: [7:51-9:56]

🔍 What Makes Deep Research Feel Like Having a Genius Assistant?

The Internet Detective That Follows Leads Like a Human

Deep Research represents a breakthrough in agentic AI behavior, demonstrating sophisticated information-gathering patterns that mirror and exceed human research methodologies.

Revolutionary Research Behavior:

  1. Lead Following - System autonomously pursues information threads across multiple sources
  2. Iterative Investigation - Goes out, finds data, follows leads, backtracks, and continues exploring
  3. Human-Like Methodology - Mimics natural research patterns but executes them more efficiently

Andrew's AGI Moment:

Andrew Mayne: "Mine was with Deep Research, because that felt like a really agentic use of it. It came back and produced something on a topic I had been interested in that was better than what I'd read before, because previously all those models would just get a bunch of sources and summarize them."

Andrew Mayne: "But when I watched the system go out on the Internet, get data, follow a lead, then follow the next one, backtrack, and come back, like I would have, but better, that was interesting."

The Autodidact's Dream Tool:

Sam describes meeting an impressive learner who uses Deep Research strategically:

Sam Altman: "He uses Deep Research to produce a report on anything he's curious about and then just sits there all day, and he's gotten good at digesting them fast and knowing what to ask next. It is an amazing new tool for people who really have a crazy appetite to learn."

Personal Workflow Revolution:

  • Andrew built custom apps to generate audio files from Deep Research content
  • The sharing feature enables easy collaboration through PDFs
  • Transforms research from hours of manual work to minutes of AI-guided investigation

Timestamp: [8:38-10:32]

📅 When Will GPT-5 Actually Launch This Summer?

Timeline Insights and Capability Expectations

Sam Altman provides the most concrete timeline information about GPT-5, while revealing the complex decisions around model naming and versioning that reflect the rapidly evolving AI landscape.

GPT-5 Timeline:

  • Target Window: "Probably sometime this summer"
  • Uncertainty Factor: Exact timing still undetermined
  • Capability Focus: Significant increase in capabilities expected

The Numbering Dilemma:

Sam Altman: "One thing that we go back and forth on is how much are we supposed to, like, turn up the big number on new models versus what we did with GPT-4o, which is just better and better and better."

Version Recognition Challenge:

Andrew Mayne: "Would I know GPT-5 versus, wow, this is a really good GPT-4.5?"

Sam Altman: "Not necessarily. I mean, it, like, it could go either way. Right? You could just, like, keep doing iterations of 4.5, or at some point you could call it five."

The Evolution of Model Development:

  1. Old Paradigm: Train model → release → train new big model → release
  2. Current Reality: Complex systems with continuous post-training improvements
  3. Ongoing Challenge: How to communicate iterative improvements to users

The Versioning Question:

Sam Altman: "Let's say we launch GPT-5, and then we update it and update it and update it. Should we just keep calling those GPT-5, like we do with GPT-4o, or should we call those 5.1, 5.2, 5.3 so you know when the version changes?"

Timestamp: [10:32-11:53]

🏷️ Why Are AI Model Names Becoming So Confusing?

The Complex Challenge of Naming Evolving AI Systems

OpenAI acknowledges the growing complexity in their model naming conventions, revealing how rapid technological advancement has created a confusing landscape that even technically savvy users struggle to navigate.

The Current Naming Crisis:

  1. User Confusion - Even technical users struggle with model selection
  2. Multiple Paradigms - Different naming schemes reflect different technological approaches
  3. Version Preference - Users sometimes prefer older snapshots over newer ones

The Paradigm Shift Problem:

Sam Altman: "I think this was an artifact of shifting paradigms. And then we kinda had these two things going at once."

This explains why GPT-4o and o3 exist simultaneously: they represent different technological approaches.

User Decision Fatigue:

  • Should I use o4-mini? o3? 4o?
  • Even technically inclined users face complex decisions
  • The "o" prefix provides some guidance but not complete clarity

Future Simplification Plans:

Sam Altman: "I am excited to just get to GPT-5 and GPT-6, and I think that'll be easier for people to use, and you won't have to think, do I want, you know, o4-mini-high or o3 or 4o."

Sam Altman: "I think we will be out of that whole mess soon. For now."

The Potential for New Complexity:

Sam Altman: "I can imagine a world where we discover some new paradigm that again means we need to, like, bifurcate the model tree."

This suggests the naming challenge may recur with future technological breakthroughs.

Timestamp: [11:53-13:22]

🧠 What Makes Memory Sam's Favorite ChatGPT Feature?

The Evolution of AI Contextual Understanding

Memory has transformed from a simple feature into a sophisticated system that fundamentally changes how users interact with ChatGPT, earning recognition as Sam Altman's personal favorite recent addition.

Memory's Evolution:

  1. Simple Beginnings - Started as a basic feature
  2. Sophisticated Development - Has become increasingly complex and capable
  3. Integration Complexity - Now deeply woven into ChatGPT's capabilities

User Experience Transformation:

  • Enables continuity across conversations
  • Learns user preferences and contexts
  • Creates more personalized interactions over time

The Integration Challenge:

Andrew Mayne: "One of the things that's made these things more capable, but also harder to understand where the capability is coming from, is integrations of things like memory."

This highlights how advanced features like memory make AI systems more powerful but also more opaque in their functioning.

Personal Endorsement:

Sam Altman: "Memory is probably my favorite recent ChatGPT feature."

Coming from the CEO, this indicates both the technical achievement and practical value that memory brings to the user experience.

Timestamp: [13:22-13:36]

💎 Key Insights

Essential Insights:

  1. Scientific Discovery Acceleration - o3 shows promising signs of approaching practical research utility, with scientists consistently reporting valuable assistance and rapid iteration cycles producing major breakthroughs every few weeks
  2. AGI Perception Varies by Role - Many users experience their first "AGI moment" with Operator + o3 watching AI use computers competently, while creators remain more cautious about AGI claims
  3. Research Workflow Revolution - Deep Research demonstrates truly agentic behavior by following information leads like humans but more efficiently, transforming research from hours to minutes

Actionable Insights:

  • For Researchers: Leverage Deep Research for comprehensive investigation topics, allowing the system to follow leads and connections you might miss
  • For Productivity: Use Operator for repetitive computer tasks that previously required manual effort, especially file organization and data collection
  • For Learning: Consider Deep Research as a starting point for any topic you want to understand deeply, then use the generated reports to guide further investigation

Timestamp: [7:13-13:36]

📚 References

People Mentioned:

  • Sam Altman - OpenAI CEO discussing o3 progress, GPT-5 timeline, and personal feature preferences
  • Andrew Mayne - Former OpenAI engineer sharing practical experiences with new AI tools
  • Marshall McLuhan - Media theorist referenced as research subject example for Operator capabilities

Companies & Products:

  • OpenAI - AI research company developing the discussed models and tools
  • ChatGPT - Primary AI assistant platform featuring memory and other discussed capabilities

Technologies & Tools:

  • o1 Model - Previous reasoning model in OpenAI's sequence
  • o3 Model - Latest reasoning model showing significant improvements over o1
  • Operator - OpenAI's computer-using AI agent recently upgraded to use o3
  • Deep Research - AI research assistant that autonomously investigates topics across internet sources
  • GPT-4o - Current flagship conversational model with continuous improvements
  • Memory Feature - ChatGPT's contextual memory system for personalized interactions

Concepts & Frameworks:

  • Agentic AI Systems - AI that can autonomously pursue goals and follow leads
  • Post-training - Continuous improvement of models after initial training
  • Model Paradigm Shifts - Fundamental changes in AI architecture requiring new naming conventions

Timestamp: [7:13-13:36]

🧠 How Does Memory Transform Your AI Experience?

The Contextual Revolution in AI Interactions

Sam Altman reveals how ChatGPT's memory feature has created a surprisingly profound shift in user experience, enabling AI to understand implicit context and deliver remarkably helpful responses with minimal input.

The Memory Experience Evolution:

  1. Historical Milestone - First computer conversation (GPT-3) felt revolutionary
  2. Context Accumulation - AI now "knows a lot of context" about individual users
  3. Implicit Understanding - Can respond effectively to questions with minimal words

The Surprising Level-Up:

Sam Altman: "Now the computer, I feel like, kind of knows a lot of context on me. And if I ask it a question with only a small number of words, it knows enough about the rest of my life to be pretty confident in what I want it to do."

Sam Altman: "Sometimes in ways I don't even think of. Like, that has been a real surprising, like, level up."

User Reception:

  • Majority Positive: Most people "really do" appreciate the contextual understanding
  • Some Resistance: Acknowledgment that "there are people who don't like it"
  • Optional Control: Users can turn off memory features if desired

Future Vision:

Sam Altman
I think we are heading towards a world where if you want, the AI will just have, like, unbelievable context on your life and give you these super, super helpful answers.
Sam AltmanOpenAIOpenAI | CEO & Co-founder

The emphasis on "if you want" highlights the importance of user choice in privacy decisions.

Timestamp: [13:46-14:32]

⚖️ Why Is OpenAI Fighting The New York Times Over User Privacy?

A Legal Battle That Could Define AI Privacy Standards

The New York Times lawsuit reveals a crucial conflict over user privacy in AI systems, with OpenAI taking a strong stance against what they view as unprecedented overreach into private user conversations.

The Legal Conflict:

  1. NYT's Request - Court order to preserve consumer ChatGPT user records beyond standard 30-day retention
  2. OpenAI's Response - Brad Lightcap wrote a letter opposing the request
  3. Strong Opposition - Sam describes it as "crazy overreach"

OpenAI's Position:

Sam Altman: "We're gonna fight that, obviously, and I suspect, I hope, but I do think we will win."

Sam Altman: "I think it was a crazy overreach of the New York Times to ask for that."

The Privacy Principle Argument:

Sam Altman: "This is someone who says, you know, they value user privacy, whatever. But I try to, like, look for the silver lining here. I hope this will be a moment where society realizes that privacy is really important."

The Broader Implications:

  • Precedent Setting - Could establish standards for AI privacy protection
  • User Trust - Affects confidence in private AI conversations
  • Industry Standards - May influence how other AI companies handle privacy

The Sensitivity Factor:

Sam Altman: "People are having quite private conversations with ChatGPT now. ChatGPT will be a very sensitive source of information, and I think we need a framework that reflects that."

Privacy as Core Principle:

Sam Altman: "Privacy needs to be a core principle of using AI. You cannot have a company like The New York Times ask an AI provider to compromise user privacy."

Timestamp: [14:32-16:05]

💰 Will ChatGPT Ever Show Advertisements?

Navigating the Complex Challenge of AI Monetization

Sam Altman provides candid insights into OpenAI's approach to advertising, revealing the delicate balance between user trust, business sustainability, and maintaining the integrity of AI responses.

Current Advertising Status:

  • No Current Implementation - "We haven't done any advertising product yet"
  • Not Completely Opposed - "I'm not totally against it"
  • High Standards Required - Any implementation "would be very hard to... take a lot of care to get right"

The Trust Factor:

Sam Altman: "People have a very high degree of trust in ChatGPT, which is interesting, because, like, AI hallucinates. It should be the tech that you don't trust that much."

Comparison to Current Platforms:

Sam Altman: "If you compare us to social media or, you know, web search or something, where you can kinda tell that you are being monetized, and the company is trying to, like, deliver you good products and services, no doubt, but also to kind of, like, get you to click on ads or whatever."

Potential Approaches and Red Lines:

What Would Destroy Trust:

Sam Altman: "If we started modifying the output, like the stream that comes back from the LLM, in exchange for who is paying us more, that would feel really bad. And I would hate that as a user. I think that'd be like a trust-destroying moment."

Possible Acceptable Models:

  1. Transaction Revenue - Small percentage from purchases made through ChatGPT recommendations
  2. Separate Ad Spaces - Advertisements outside the main LLM response stream
  3. Transparent Implementation - Clear indication when ads are present

High Standards for Implementation:

Sam Altman: "The burden of proof there, I think, would have to be very high, and it would have to feel really useful to users and really clear that it was not messing with the LLM's output."

Timestamp: [16:17-18:37]

🛒 Could AI-Powered Shopping Actually Help Consumers?

The Potential for Better Purchase Decisions Through AI

Andrew and Sam explore how AI could revolutionize e-commerce by providing more informed purchasing decisions, while acknowledging the challenges of maintaining trust and alignment with user interests.

The Consumer Benefit Vision:

Andrew Mayne: "I would love to do all my purchasing through ChatGPT or a really good chatbot, because a lot of the time I feel like I'm not making the most informed decisions."

This highlights a genuine user need for better purchase guidance and information.

The Implementation Challenge:

Sam Altman: "That's good if we can do it in some sort of really clear and aligned way, but I don't know."

Current Business Model Preference:

Sam Altman: "I love that we build good services. People pay us for them. It's like very clear."

The Incentive Alignment Problem:

  • Direct Payment Model - Clear relationship between user payment and service quality
  • Ad-Driven Models - Potential conflict between user needs and advertiser interests
  • Trust Preservation - Maintaining user confidence in AI recommendations

Transparency Requirements:

Sam Altman: "Anything we do, we obviously need to just be, like, crazy upfront and clear about."

This suggests any future monetization would prioritize user awareness and consent.

Timestamp: [18:37-20:23]

🆚 How Do Different Tech Giants' Business Models Affect AI Development?

Comparing Incentive Structures Across Major AI Players

The conversation reveals how different monetization approaches by tech giants create varying incentive structures that could significantly impact AI development and user experience.

Business Model Comparisons:

Google's Ad-Tech Foundation:

Andrew Mayne: "Google builds great stuff. I think the new Gemini 2.5 is a really good model... But at the end of the day, Google is an ad tech company."

Sam Altman: "Google Search was an amazing product for a long time. It does feel to me like it's degraded."

Historical Google Success:

Sam Altman: "There was a time where there were lots of ads, but I still thought it was the best thing on the Internet. I mean, I love Google Search. So it's clearly possible to be a good ad-driven company."

Apple's Premium Model:

Andrew Mayne: "The Apple model, as an Apple user, I liked: I know I'm paying a lot for my phone, but I know they're not trying to cram all these things into it."

Andrew Mayne: "They did iAds, which was, you know, not terribly effective, which probably showed you their heart was really not in it."

Sam Altman: "Their heart was really not in it."

Incentive Structure Analysis:

  1. Ad-Driven Models - Potential conflict between user experience and revenue generation
  2. Premium Models - Alignment between user satisfaction and business success
  3. Mixed Approaches - Complexity in balancing multiple revenue streams

The Degradation Concern:

The discussion suggests that ad-driven models may lead to gradual service degradation as monetization pressures increase over time.

Future Monitoring:

Andrew Mayne
I guess we just have to keep watching, and when we start to think, man, ChatGPT is really pushing this, we need to start wondering about it.

Timestamp: [19:03-20:23]

💎 Key Insights

Essential Insights:

  1. Memory Creates Profound UX Shift - ChatGPT's contextual memory enables surprisingly effective responses to minimal prompts, transforming user interaction patterns and creating unexpected "level-ups" in AI helpfulness
  2. Privacy Becomes AI's Battleground - The New York Times lawsuit represents a crucial precedent-setting moment for AI privacy standards, with OpenAI positioning user privacy as a core principle that cannot be compromised
  3. Monetization Threatens Trust - Any advertising implementation in AI systems risks destroying user trust if it modifies AI responses for commercial reasons, requiring unprecedented transparency and separation from core AI outputs

Actionable Insights:

  • For Users: Take advantage of memory features while understanding you can control privacy settings, but recognize that private AI conversations may need stronger legal protections
  • For Businesses: Consider how different AI platforms' business models (subscription vs. advertising) might affect the quality and bias of responses you receive
  • For Policymakers: The NYT vs. OpenAI case highlights the urgent need for frameworks protecting AI conversation privacy as these systems become repositories of sensitive personal information

Timestamp: [13:46-20:23]

📚 References

People Mentioned:

  • Sam Altman - OpenAI CEO discussing privacy principles, business models, and user trust in AI systems
  • Andrew Mayne - Former OpenAI engineer exploring implications of different tech business models
  • Brad Lightcap - OpenAI executive who wrote response letter to New York Times lawsuit

Companies & Products:

  • OpenAI - AI company defending user privacy rights against legal pressure
  • The New York Times - Media company requesting extended user data retention in ongoing lawsuit
  • Google - Ad-tech company with Gemini 2.5 model and search products
  • Apple - Premium device company with different monetization model
  • Instagram - Social media platform mentioned for advertising approach

Technologies & Tools:

  • ChatGPT - AI assistant with memory features and privacy considerations
  • Gemini 2.5 - Google's latest AI model receiving positive evaluation
  • iAds - Apple's discontinued advertising platform

Concepts & Frameworks:

  • Memory Feature - AI's contextual understanding system for personalized interactions
  • User Privacy Framework - Proposed standards for protecting AI conversation data
  • Business Model Alignment - How monetization strategies affect product development and user experience
  • Trust in AI Systems - User confidence factors in AI responses and recommendations

Timestamp: [13:46-20:23]

🤝 What Happens When AI Becomes Too Agreeable?

The Hidden Dangers of Short-Term User Optimization

OpenAI discovered a critical flaw in their approach when models became overly pleasing and agreeable, revealing how optimizing for immediate user satisfaction can create long-term problems similar to social media's algorithmic failures.

The Social Media Parallel:

Sam Altman
One of the big mistakes of the social media era was that the feed algorithms had a bunch of unintended negative consequences on society as a whole, and maybe even on individual users, even though they were doing the thing that a user wanted, or that someone thought the user wanted, in the moment: getting them to keep spending time on the site.

The Misalignment Problem:

  1. Short-Term vs. Long-Term - What users want immediately versus what's helpful over time
  2. User Signal Confusion - Individual preference ratings don't reflect overall interaction quality
  3. Optimization Trap - Following user feedback too closely creates unhealthy patterns

The Core Issue:

Sam Altman
Say you show a user two responses and ask, "Which one is more helpful to you?", and then you try to build a model that is most helpful to the user. On any given thing, you might want the model to behave one way, but over the course of all your interactions with an AI, that might not match up.

Real-World Example - DALL-E 3:

Andrew identifies how this affected image generation:

Andrew Mayne
DALL-E 3, which I thought was technically a really capable model, but the images all kind of started to be one genre, an HDR sort of style. Was that from doing those comparisons, where users looking at just two things in isolation said, "I prefer this one"?

The Filter Bubble Analogy:

Sam Altman
Maybe the analogy to filter bubbles is going to be AIs that are helpful to a user over a short time horizon, but not over a long horizon.

Timestamp: [20:30-23:20]

🏗️ What Exactly Is Project Stargate Worth $500 Billion?

The Unprecedented Infrastructure Investment for AI's Future

Sam Altman provides the clearest explanation of Project Stargate, revealing it as a massive effort to bridge the enormous gap between current AI capabilities and what's possible with dramatically more computational power.

Simple Definition:

Sam Altman
I think it's quite simple. It's an effort to finance and build an unprecedented amount of compute.

The Compute Gap Reality:

Sam Altman
It's totally true that we don't have enough compute to let people do what they want. But if people knew what we could do with more compute, they would want way, way more.
Sam Altman
So there's this incredibly huge gap between what we can offer the world today and what we could offer the world with 10 times more compute, or someday, hopefully, 100 times more compute.

Scale and Financing:

The Money Question:

Sam Altman
We don't literally have it sitting in the bank account today, but we are... It's not in the room today. But we will deploy it over the next not even that many years. Unless something really goes wrong and it turns out we can't build these computers, I'm confident that people are good for it.

Infrastructure Requirements:

Sam Altman
A thing that is different about AI from other technologies I've worked on, or at least AI at the scale of delivering it usefully to hundreds of millions or billions of people around the world, is just how big the infrastructure investment has to be.

Mission Statement:

Sam Altman
Stargate is an effort to pull a lot of capital and technology and operational expertise together to build the infrastructure to go deliver the next generation of services to all the people who want them and make intelligence as abundant and cheap as possible.

Timestamp: [23:36-25:05]

🌍 How Complex Is Building a Gigawatt-Scale AI Facility?

Inside the Mind-Blowing Engineering of Modern AI Infrastructure

Sam Altman shares his awe-inspiring experience visiting the first Stargate construction site in Abilene, revealing the extraordinary global coordination required to build AI infrastructure at unprecedented scale.

The Abilene Experience:

Sam Altman
I went recently to the first site that we're building out in Abilene. That'll be roughly 10% of the initial commitment to Stargate, the sort of $500 billion.

Scale Realization:

Sam Altman
I knew in my head what an order-of-gigawatt-scale site looks like. But then to go see one being built, the thousands of people running around doing construction, to stand inside the rooms where the GPUs are getting installed and look at how complex the whole system is and the speed with which it's going... it is quite something.

The Pencil Analogy - Global Complexity:

Sam Altman
There's a great quote about a pencil, just a standard wood-and-graphite pencil, and how no one person could build it. And it's this magic of capitalism, a miracle really, that the world gets coordinated to do these things.

Supply Chain Marvel:

Sam Altman
Standing inside of the first Stargate site, I was really just thinking about the global complexity that it took to get these racks of GPUs running.

Historical Perspective:

Sam Altman
The work that happened over the last thousand years, or at least many hundreds: people working incredibly hard to get these hard-won scientific insights, then building the engineering and the companies and the complex supply chains, reconfiguring the world. All of that had to happen to get this rack of magic put somewhere.

From Rocks to AI:

Sam Altman
Think about all the stuff that went into that, and trace it all the way back to people who were just digging rocks out of the ground and seeing what happened, so that you now get to just type something into ChatGPT and it does something for you.

Timestamp: [25:05-28:08]

⚡ Did Elon Musk Try to Sabotage Project Stargate?

Political Power and AI Competition Concerns

Sam Altman makes serious allegations about Elon Musk's attempts to interfere with Stargate's international partnerships, revealing concerns about the abuse of political power in AI competition.

The Allegation:

Andrew Mayne
I read a behind-the-scenes story about the development of Project Stargate and the international partnerships, particularly the UAE, and that Elon Musk had tried to derail that.

Sam's Response:

Sam Altman
I had said, I think also externally, but at least internally after the election that I didn't think Elon was going to abuse his power in the government to unfairly compete. And I regret to say I was wrong about that.
Sam Altman
I mean, I don't like being wrong in general, but mostly I just think it's really unfortunate for the country that he would do these things. And I genuinely didn't think he was going to.

Administration's Response:

Sam Altman
I'm grateful that the administration has really done the right thing and stood up to that kind of behavior.

The Competitive Landscape Shift:

Andrew Mayne
A couple of years ago, people thought, okay, whoever gets there first is the winner, that's it, the game is over. And now we realize there are great AI labs elsewhere. Anthropic is building great tools. I think Google's really got its game up.

The Transistor Analogy:

Sam Altman
The example that I like the most is that the discovery of AI is analogous, not perfectly but closely, to the discovery of the transistor, in a surprising number of ways. Many companies are going to build great things on it, and then eventually it's going to seep into almost all products.

Zero-Sum Mentality Critique:

Sam Altman
I wish Elon would be less zero sum about it. Or negative sum.
Andrew Mayne
I think the pie is just gonna get bigger and bigger if we think about that.

Timestamp: [28:08-30:11]

⚡ How Will the World Power the AI Revolution?

Energy Infrastructure Challenges and Global Solutions

The conversation reveals the massive energy requirements for AI training and inference, with innovative approaches to harness energy resources globally through strategic data center placement.

The Energy Reality:

Andrew Mayne
I was just at an energy conference, and it was interesting talking to the people involved in energy production; hyperscaling, the term they used for this, was a topic.

Extreme Examples:

Andrew Mayne
For Grok 3, apparently, they had to put generators in the parking lot to be able to train that model.

Energy Strategy - All of the Above:

Sam Altman
I think kind of everywhere. It's a big mix right now. Eventually, I'm very excited about advanced nuclear, both fission and fusion. But for now, it's a whole mix of the entire portfolio: gas, solar, nuclear, everything.

The Intelligence Export Model:

Sam Altman
You know, traditionally, it's very hard to move energy around the world. Most kinds. But if you exchange energy for intelligence and then move the intelligence around the world, it's much easier.
Sam Altman
So you could put the giant training center, or even the big inference clusters, in a lot of places and then just ship the output over the Internet.

Global Opportunities:

  • Alberta Example - Regions with abundant energy but limited local demand
  • Strategic Placement - Locating AI infrastructure where energy is plentiful
  • Digital Export - Converting local energy into globally valuable AI services

Future Energy Mix:

  1. Immediate Term - Gas, solar, nuclear, and other existing sources
  2. Long-Term Vision - Advanced nuclear (fission and fusion)
  3. Global Distribution - Leveraging energy-rich regions worldwide

Timestamp: [30:11-31:27]

💎 Key Insights

Essential Insights:

  1. Short-Term Optimization Creates Long-Term Problems - AI systems optimized for immediate user satisfaction can become unhelpfully agreeable, similar to social media algorithms that prioritize engagement over wellbeing
  2. Project Stargate Represents Infrastructure Revolution - The $500 billion investment aims to bridge the massive gap between current AI capabilities and what's possible with 10-100x more compute power
  3. Energy Becomes Exportable Through AI - Traditional energy distribution challenges can be solved by converting local energy into AI intelligence and distributing the results globally via internet

Actionable Insights:

  • For AI Users: Be aware that overly agreeable AI responses might not serve your long-term interests; consider requesting more balanced or challenging perspectives when appropriate
  • For Energy Sector: Regions with abundant energy but limited local demand have new opportunities to monetize through AI infrastructure hosting
  • For Policymakers: The intersection of political power and AI competition requires careful oversight to prevent abuse of governmental authority in commercial disputes

Timestamp: [20:30-31:27]

📚 References

People Mentioned:

  • Sam Altman - OpenAI CEO discussing AI behavior challenges, Stargate infrastructure, and competitive dynamics
  • Andrew Mayne - Former OpenAI engineer exploring AI development patterns and infrastructure requirements
  • Elon Musk - Accused of attempting to interfere with Stargate international partnerships
  • Greg Brockman - OpenAI co-founder mentioned regarding competitive landscape evolution

Companies & Products:

  • OpenAI - Company developing Stargate infrastructure and addressing AI behavior challenges
  • Anthropic - AI research company mentioned as strong competitor building great tools
  • Google - Tech giant recognized for improving AI capabilities significantly
  • The UAE - International partner in Project Stargate infrastructure development
  • Grok 3 - AI model requiring parking lot generators for training due to energy demands

Technologies & Tools:

  • DALL-E 3 - Image generation model that exhibited style homogenization due to optimization patterns
  • Project Stargate - $500 billion infrastructure project for unprecedented AI compute capacity
  • James Webb Space Telescope - Referenced in context of complex engineering projects

Concepts & Frameworks:

  • Short-Term vs. Long-Term Optimization - The challenge of balancing immediate user satisfaction with long-term benefit
  • Filter Bubbles in AI - Risk of AI systems creating unhelpful echo chambers through over-optimization
  • Energy-to-Intelligence Conversion - Strategy of placing AI infrastructure in energy-rich regions and exporting intelligence
  • Transistor Analogy - Comparison of AI discovery to transistor invention as foundational technology
  • Hyperscaling - Industry term for massive infrastructure scaling for AI applications

Timestamp: [20:30-31:27]

🔬 Could AI Solve Physics Without New Experiments?

The Ultimate Test of Pure Intelligence

Sam Altman poses a fascinating question about the limits of AI intelligence: whether superintelligent systems could make breakthrough discoveries using only existing data, potentially revolutionizing our approach to scientific research.

The Data Abundance Problem:

Andrew Mayne
And he talked about how his biggest bottleneck was that they're about to get terabytes of data, but he doesn't have enough scientists to work on it, doesn't have enough people to go through the data. And here we have these answers about the universe right in front of us, and it's like a big data problem.

Sam's Particle Accelerator Vision:

Sam Altman
I've always joked that one thing we should do when OpenAI has enough money is just build a gigantic particle accelerator and solve high-energy physics once and for all. Because I think that'd be a triumphant, wonderful thing.

The Pure Intelligence Question:

Sam Altman
But I wonder what are the odds that a really, really smart AI could look at the data we currently have... With no more data, no bigger particle accelerator and just figure it out. It's not impossible.
Sam Altman
There's already a lot of data out there. There's a lot of smart people in the world, but we don't know how far intelligence can go. With no more experiments, how much more could we figure out?

Hidden Discoveries Example:

Andrew shares how Ozempic was discovered in the early 1990s but rejected by drug companies, sitting unused for 25 years before becoming a life-changing treatment for obesity.

Current Scientific Applications:

Sam Altman
I suspect there are a lot of other examples that we'll find where maybe we already have existing drugs that we know do something good, but they're usable in some other big way, or with a couple of small modifications we are very close to something great.
Sam Altman
It's been very heartening to hear from scientists using even the current generation of models for this kind of work.

Timestamp: [31:33-33:13]

🧠 How Do Reasoning Models Actually Think?

Inside the Mind of AI: From Reflex to Reflection

Sam Altman explains the fundamental difference between standard AI responses and reasoning models, revealing how AI can now engage in human-like internal deliberation before responding.

The Evolution from GPT to Reasoning:

Sam Altman
So the GPT models can reason a little bit. And in fact, one of the things that got people really excited in the early days of the GPT models was that you could get better performance by telling the model, "let's think step by step."
Sam Altman
And it would then just output text that was thinking step by step and get a better answer, which was sort of amazing that that worked at all.
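
The "let's think step by step" technique is literally a one-line prompt change: append the phrase, and the model emits its intermediate reasoning as text before the final answer. A minimal sketch of how such a prompt could be assembled (`build_messages` is a hypothetical helper, not part of OpenAI's API; the role/content dicts just follow the common chat-message shape):

```python
def build_messages(question: str, step_by_step: bool = True) -> list[dict]:
    """Build a chat-style message list, optionally with the CoT nudge appended."""
    content = question
    if step_by_step:
        # The zero-shot chain-of-thought trick: a plain-text suffix.
        content += "\n\nLet's think step by step."
    return [{"role": "user", "content": content}]

msgs = build_messages("If a train travels 60 km in 40 minutes, what is its speed in km/h?")
print(msgs[0]["content"])
```

In practice the returned list would be passed to a chat-completion endpoint; the point is only that the nudge is ordinary text appended to the user's question, which is why it was "sort of amazing that that worked at all."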

The Human Thinking Analogy:

Sam Altman
When you ask me a question, if it's a really easy one, I might just fire back with the answer almost on reflex. But if it's a harder question, I might think in my head and have my internal monologue go: well, I could do this or that, or maybe this will be clearer, I'm not sure about that. And I can backtrack and retrace my steps.

The Processing Time Revolution:

Sam Altman
The reasoning models are just pushing that much further.

User Willingness to Wait:

Sam Altman
One thing I have been surprised by is that people are surprisingly willing to wait for a great answer, even if the model thinks for a while. All of my instincts have been that the instant response is the thing that matters and users hate to wait.
Sam Altman
And for a lot of stuff, that's true. But for hard problems with a really good answer, people are quite willing to wait.

Time as a Quality Metric:

Andrew notes how some companies are using thinking time as a metric: "This model actually spent like fifteen minutes or thirty minutes or whatever length of time to think about a thing, which is a good metric, but it needs to actually give you the right answer."

Timestamp: [33:13-35:42]

📱 What Will Replace the Smartphone Era?

Reimagining Computing for an AI-Native World

Sam Altman and Jony Ive's collaboration hints at revolutionary hardware designed specifically for AI interaction, moving beyond the limitations of devices created for a pre-AI world.

The Fundamental Problem:

Sam Altman
Computers, software and hardware, just the way we think of current computers, were designed for a world without AI. And now we're in, like, a very different world, and what you want out of hardware and software is changing quite rapidly.

New Interaction Paradigms:

Sam Altman
You might want something that is way more aware of its environment, that has way more context on your life. You might want to interact with it in a different way than typing and looking at a screen.

The Quality Commitment:

Sam Altman
We're going to try to do something at a crazy high level of quality, and that does not come fast.
Sam Altman
It will be worth the wait, I hope, but it's going to be a while.

Vision of AI-Integrated Computing:

Sam Altman
If you really trusted an AI to understand all the context of your life and your question, and to make good judgments on your behalf, you could have it sit in a meeting, listen to the whole meeting, and know what it was allowed to share with whom, what it shouldn't share with anyone, and roughly what your preferences would be.
Sam Altman
And then you ask it one question, and you trust that it's going to go do the right follow-ups with the right people. You can then imagine a totally different way of using a computer to get done what you want.

The Public/Private Challenge:

Andrew Mayne
One of the things that made the phone so ubiquitous is the fact that I can be in public and look at the screen, or be in private and have a phone call and talk to it. And I think that's one of the challenges for new devices: trying to bridge the gap between what we use in public and in private.

Flexible Use Cases:

Sam Altman
Phones are unbelievable things. I mean, they are really fantastic for a lot of reasons. And you can imagine one new device that you could use everywhere, but there are also some things that I do differently in public and at home.

Timestamp: [35:42-38:41]

💼 What Career Advice Matters in an AI World?

Essential Skills for the Next Two Decades

Sam Altman provides practical guidance for navigating careers in an AI-transformed world, emphasizing both tactical skills and fundamental human capabilities that will remain valuable.

Tactical Advice:

Sam Altman
The obvious tactical stuff is probably what you'd expect me to say, like learn how to use AI tools.

The Rapid Shift:

Sam Altman
It's funny how quickly the world went from telling the average 20- or 25-year-old "learn to program"... to "programming doesn't matter, learn to use AI tools." I wonder what will be next, but of course, there will be something next.

Fundamental Skills for the Future:

Sam Altman
On the sort of like broader front, I believe that skills like resilience, adaptability, creativity, figuring out what other people want. I think these are all surprisingly learnable.
Sam Altman
And it's not as easy as saying "go practice using ChatGPT," but it is doable. And those are the kinds of skills that I think will pay off a lot in the next couple of decades.

Universal Application:

Andrew Mayne
And would you say the same thing for 45-year-olds: just learn how to use it in your role now?
Sam Altman
Yeah. Probably.

The Post-AGI Employment Reality:

Andrew Mayne
Whenever we reach whatever your personal definition of AGI is, will more people be working for OpenAI after that or before?
Sam Altman
More. The slightly longer answer, with more than one word, is that there will be more people, but each of them will do vastly more than what one person did in the pre-AGI times.

The Technology Goal:

Andrew Mayne
Which is the goal of technology.

This reinforces that AI is meant to augment human capability rather than replace humans entirely.

Timestamp: [38:50-40:19]

💎 Key Insights

Essential Insights:

  1. Pure Intelligence Potential - AI might solve major scientific problems using only existing data without new experiments, potentially unlocking discoveries hidden in plain sight like the 25-year delay of Ozempic
  2. Reasoning Revolution - Modern AI can engage in human-like internal deliberation, with users surprisingly willing to wait for thoughtful responses rather than demanding instant answers
  3. Hardware Paradigm Shift - Current devices were designed for a pre-AI world; the future requires fundamentally different interaction models with context-aware, environmentally integrated computing

Actionable Insights:

  • For Career Development: Focus on learning AI tools as the new fundamental skill, while developing resilience, adaptability, and creativity as enduring human advantages
  • For Professionals: Embrace longer AI processing times for complex problems rather than demanding immediate responses; quality thinking takes time
  • For Investors/Entrepreneurs: Consider how current computing paradigms may become obsolete as AI-native devices emerge with radically different interaction models

Timestamp: [31:33-40:19]

📚 References

People Mentioned:

  • Sam Altman - OpenAI CEO discussing AI's scientific potential, reasoning models, and future hardware vision
  • Andrew Mayne - Former OpenAI engineer exploring AI applications and career implications
  • Jony Ive - Former Apple design chief collaborating with OpenAI on hardware development

Companies & Products:

  • OpenAI - AI company developing reasoning models and exploring hardware applications
  • Anthropic - Mentioned as using thinking time as a model performance metric
  • James Webb Space Telescope - Referenced for data analysis challenges in astronomy
  • Apple - Comparison point for hardware design philosophy and AirPods usage patterns

Technologies & Tools:

  • Reasoning Models - AI systems that engage in step-by-step internal deliberation before responding
  • Sora - OpenAI's video generation model with physics understanding capabilities
  • Deep Research - AI research assistant that processes questions over extended time periods
  • GPT Models - Earlier generation models with basic reasoning capabilities
  • Ozempic - Weight loss drug discovered in early 1990s but not developed until decades later

Concepts & Frameworks:

  • Pure Intelligence Discovery - The concept of making scientific breakthroughs using only existing data
  • Step-by-Step Reasoning - AI technique for improving response quality through explicit thinking processes
  • AI-Native Hardware Design - Computing devices designed specifically for AI interaction rather than traditional computing
  • Context-Aware Computing - Systems that understand environmental and personal context for better interactions
  • Human Augmentation vs. Replacement - Philosophy that AI should enhance rather than eliminate human capabilities

Timestamp: [31:33-40:19]