
How To Get The Most Out Of Vibe Coding | Startup School
AI can't yet one-shot an entire product, but with the rise of vibe coding, it's getting close. YC's Tom Blomfield has spent the last month building side projects with tools like Claude Code, Windsurf, and Aqua, seeing just how far you can push modern LLMs. From writing full-stack apps to debugging with a single paste of an error message, AI is becoming a legit collaborator in the dev process. This is a playbook for anyone who wants to get the most out of vibe coding and build faster.
What is vibe coding and why should you care?
Tom Blomfield, a YC partner, introduces the concept of "vibe coding" - a practice that produces remarkably good results and improves measurably with tinkering and best practices. After a month of experimenting with AI coding tools on side projects, he's discovered that getting great results requires specific techniques.
The evolution mirrors prompt engineering from 1-2 years ago, where people discovered new techniques weekly and shared them on social media. The most effective vibe coding techniques are essentially the same ones professional software engineers use.
"We're trying to use these tools to get the best results" - Tom Blomfield
How do you break out of AI coding loops?
When AI tools get stuck in debugging loops and can't implement or fix something, there's a simple but effective solution:
- Switch contexts: Leave your IDE and go directly to the LLM's website
- Paste your code: Copy the problematic code into the web UI
- Ask the same question: For some reason, the web interface often succeeds where the IDE failed
This technique can solve problems that seemed impossible just moments before, breaking through whatever limitation was causing the loop.
Should you use multiple AI coding tools simultaneously?
Yes! Running both Cursor and Windsurf on the same project creates a powerful workflow:
Cursor advantages:
- Faster execution
- Better for front-end work
- Great for full-stack linking (connecting front-end to back-end)
Windsurf advantages:
- Takes more time to think through problems
- Better for complex logic while you multitask
Parallel workflow strategy:
- Start a task in Windsurf (it thinks longer)
- Switch to Cursor for quick front-end updates
- Give both tools the same context and styling requirements
- Let them generate different iterations simultaneously
- Pick the best result from either tool
"Sometimes I'll load up both at the same time and have them both basically give me slightly different iterations of the same front end and I'll just pick which one I like better"
How should you think about AI as a programming tool?
Think of AI as a completely different kind of programming language. Instead of programming with code, you're programming with natural language. This fundamental shift requires a new approach:
Key principles:
- Provide comprehensive context: Give detailed information upfront
- Be explicit about requirements: Don't assume the AI understands implicit needs
- Treat it like a new programming paradigm: Different rules, different best practices
The quality of your results directly correlates with the quality and detail of the context you provide. Vague instructions lead to vague implementations.
What's the reverse-direction approach to vibe coding?
Start with test cases first, then let the AI generate code to meet those specifications:
The process:
- Handcraft test cases: Write these yourself, don't use LLMs
- Create guard rails: Establish strong rules the AI must follow
- Let AI generate freely: Within the constraints of your tests
- Verify with green flags: When tests pass, the job is done
- Focus on architecture: Review modularity, don't micromanage implementation
This approach prevents the AI from going off-track and ensures you get working, tested code that meets your exact specifications.
"Once I see those green flags on my test cases, the job is done. I don't need to micromanage my code bases"
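As a minimal sketch of this test-first flow (the slugify example is invented for illustration, not taken from the talk): handcraft the assertions first, then let the AI generate an implementation whose only job is to turn them green.

```python
import re

# Step 1: handcrafted test cases, written by a human before any AI is involved.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  Ruby on Rails  ") == "ruby-on-rails"
    assert slugify("already-slugged") == "already-slugged"

# Step 2: an implementation of the kind the AI would generate. It is free to
# change internally as long as the handcrafted tests above stay green.
def slugify(text):
    text = re.sub(r"[^a-z0-9]+", "-", text.lower())
    return text.strip("-")
```

The tests act as the guard rails: you review the architecture, not every line of the generated function.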
Why should you plan architecture before coding?
Spend an "unreasonable amount of time" in pure LLM conversations building out scope and architecture before moving to coding tools:
Critical planning phase:
- Define clear scope: Know exactly what you're building
- Design architecture: Map out how components will work together
- Set boundaries: Prevent the AI from making random decisions
- Understand the goal: Have crystal clear objectives
What happens without planning:
- AI makes arbitrary architectural decisions
- Code becomes inconsistent and hard to maintain
- Features don't integrate well together
- Debugging becomes exponentially harder
Only after this thorough planning should you "offload" the work to Cursor or other coding tools.
How do you recognize when AI falls into rabbit holes?
Monitor for these warning signs that indicate the LLM is struggling:
Red flags:
- Keeps regenerating similar code repeatedly
- Code looks "funky" or inconsistent
- Can't figure out the core problem
- You're constantly copy-pasting error messages
- Solutions seem to get worse, not better
Recovery strategy:
- Take a step back: Stop the current approach immediately
- Prompt for reflection: Ask the AI to examine why it's failing
- Analyze the root cause: Is it lack of context or just an unlucky run?
- Reset if needed: Start fresh rather than building on broken foundations
"If you notice that it just keeps regenerating code and it looks kind of funky, it's not really able to figure it out... something's gone awry and you should take a step back"
Which tools should beginners vs experienced developers use?
For complete beginners (never written code):
- Replit: Easy visual interface, great for trying new UIs
- Lovable: Direct code implementation, perfect for quick prototyping
- Benefits: Many product managers and designers skip Figma mockups entirely
For experienced developers (even if rusty):
- Windsurf: Better for complex backend logic
- Cursor: Faster execution, great for full-stack work
- Claude Code: Advanced AI coding capabilities
Limitation of beginner tools: Lovable and similar tools struggle with precise backend modifications. They excel at UI changes but can bizarrely alter backend logic when you modify front-end elements.
What's the step-by-step planning approach for vibe coding?
Don't dive straight into coding. Instead, follow this systematic approach:
Phase 1: Collaborative Planning
- Work with the LLM to create a comprehensive plan
- Save the plan as a markdown file in your project folder
- Keep referring back to it throughout development
Phase 2: Plan Refinement
- Delete or remove features you don't like
- Mark complex features as "won't do"
- Create an "ideas for later" section for out-of-scope items
- Be explicit about what's included vs excluded
Phase 3: Section-by-Section Implementation
- Tell the LLM: "Let's just do section two right now"
- Check that it works and run tests
- Commit to Git
- Have AI mark the section as complete in the plan
- Move to the next section
This prevents trying to "one-shot" entire products, which current models can't yet handle reliably for complex projects.
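A hypothetical plan file following this structure might look like the fragment below (the project and section names are invented for illustration):

```markdown
# Project Plan: Recipe Sharing App

## Section 1: Accounts and auth (DONE)
## Section 2: Recipe create/edit/delete (current focus)
## Section 3: Search

## Won't do
- Real-time collaborative editing (too complex for v1)

## Ideas for later
- Social sharing
- Meal planning
```

Keeping this file in the project folder lets you tell the LLM "let's just do section two right now" and have it mark sections complete as you commit.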
How quickly are AI coding capabilities evolving?
The pace of improvement is so rapid that current advice might be obsolete within 2-3 months:
Current limitations:
- Models can't reliably one-shot entire complex products
- Piece-by-piece implementation is still necessary
- Each step needs testing and Git commits for safety
Future trajectory:
- Models are getting better "so quickly"
- Hard to predict where capabilities will be in the near future
- What's challenging today may be trivial tomorrow
This rapid evolution means staying flexible and continuously updating your vibe coding practices as new capabilities emerge.
"This advice might change in the next 2 or 3 months. The models are getting better so quickly that it's hard to say where we're going to be in the near future"
Why is Git version control crucial for vibe coding?
Version control becomes your safety net when AI coding goes wrong:
Essential Git practices:
- Use Git religiously: Don't trust AI tool revert functions
- Start with a clean slate: Begin each feature from a clean git state
- Commit working versions: Always have a known good state to return to
- Don't fear hard resets: Use git reset --hard when AI goes off-track
The accumulation problem: When you prompt AI multiple times to fix something, it tends to accumulate "layers and layers of bad code" rather than understanding root causes. Even if the 6th attempt works, the code quality suffers.
Clean solution strategy:
- Get the working solution from the AI
- Do a git reset to clean state
- Feed the clean solution back to AI
- Implement it fresh without the accumulated cruft
Key Insights
- Vibe coding follows the same principles as professional software engineering, just with AI as your collaborative partner
- Multiple AI tools used simultaneously create better results than relying on just one
- Planning and architecture work upfront prevents AI from making arbitrary decisions that break your project
- Test-driven development with handcrafted test cases creates guardrails that keep AI focused and productive
- Git version control is essential because AI can accumulate bad code through multiple correction attempts
- Current AI can't reliably oneshot complex products, but the capabilities are improving rapidly
- Context and detailed instructions are everything; treat AI like a new programming language that requires explicit communication
References
People:
- Tom Blomfield - YC partner sharing vibe coding expertise
Tools & Platforms:
- Replit - Visual interface tool for coding beginners
- Lovable - UI-focused development tool with visual interface
- Windsurf - AI coding tool for complex thinking tasks
- Cursor - Fast AI coding tool, particularly good for front-end work
- Claude Code - AI coding tool mentioned for experienced developers
- Figma - Design tool that many are bypassing in favor of direct coding
Companies & Organizations:
- Y Combinator (YC) - Startup accelerator running the Spring Batch program
- YC Spring Batch - Current cohort of startups in the program
Concepts & Methodologies:
- Vibe Coding - AI-assisted coding approach using natural language
- Prompt Engineering - Technique for optimizing AI interactions
- Test-Driven Development - Starting with tests before writing code
- Version Control (Git) - Code management and versioning system
What type of tests should you write for AI-generated code?
LLMs are excellent at writing tests, but they often default to low-level unit tests. Instead, focus on high-level integration tests that simulate real user behavior:
Preferred testing approach:
- End-to-end testing: Simulate someone clicking through your site or app
- Feature verification: Ensure complete workflows work from start to finish
- User journey testing: Test actual user paths, not isolated functions
Why high-level tests matter:
- LLMs frequently make unnecessary changes to unrelated logic
- You ask it to fix one thing, it randomly changes something else
- Test suites catch these regressions early
- Prevents accumulation of broken code across features
Write these comprehensive integration tests before moving to the next feature to maintain code quality and catch AI-induced bugs.
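A sketch of the difference, using a toy in-memory app (the names are invented): rather than unit-testing each method in isolation, the test walks a complete user journey, which is the kind of test that catches an LLM quietly breaking an unrelated step.

```python
class TodoApp:
    # Stand-in for a real app. In practice this would be a browser-driven
    # end-to-end test (e.g. with a tool like Playwright) clicking through
    # the real UI rather than calling methods directly.
    def __init__(self):
        self.items = []

    def add(self, text):
        self.items.append({"text": text, "done": False})

    def complete(self, index):
        self.items[index]["done"] = True

    def open_items(self):
        return [i["text"] for i in self.items if not i["done"]]

def test_user_adds_then_completes_a_todo():
    # One flow, start to finish, as a user would experience it.
    app = TodoApp()
    app.add("buy milk")
    app.add("write tests")
    app.complete(0)
    assert app.open_items() == ["write tests"]
```

If a later "fix" to an unrelated feature breaks any step of this journey, the suite fails immediately instead of the regression surfacing features later.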
How can AI help with non-coding tasks in development?
AI isn't just for writing code; it can handle the entire development ecosystem:
DevOps and Infrastructure:
- DNS configuration: Claude Sonnet 3.7 can configure DNS servers (a typically hated task)
- Hosting setup: Set up Heroku hosting via command line tools
- Server management: Acts as a DevOps engineer, accelerating progress 10x
Design and Assets:
- Favicon creation: ChatGPT can generate site icons and favicons
- Image processing: Claude can write scripts to resize images into multiple formats
- Cross-platform optimization: Handle different sizes needed across platforms
Complete workflow example:
- ChatGPT creates the favicon image
- Claude writes a throwaway script to resize it
- Script generates six different sizes and formats automatically
- Ready for deployment across all platforms
"The AI is now my designer as well"
What's the most effective approach to fixing bugs with AI?
The simplest bug fix method is often the most powerful:
Step 1: Copy-paste error messages
- Take error from server logs or JavaScript console
- Paste directly into the LLM
- No explanation neededβthe error message is usually sufficient
- AI can identify and fix problems from the error alone
Why this works so well:
- Error messages contain precise technical details
- AI can pattern-match against known issues
- Faster than explaining what you think is wrong
- More accurate than human interpretation
Future evolution:
- Major coding tools will soon ingest errors automatically
- No more manual copy-pasting required
- LLMs will tail logs and inspect browser errors directly
- Humans won't need to be the "copy-paste machine"
"It's so powerful that pretty soon I actually expect all the major coding tools to be able to ingest these errors without humans having to copy paste"
How should you handle complex bugs that resist simple fixes?
For stubborn bugs that don't respond to copy-paste error fixing:
Systematic debugging approach:
- Ask for analysis first: Have LLM think through 3-4 possible causes
- No immediate coding: Don't let AI jump straight to code changes
- Reset after failed attempts: Git reset after each unsuccessful fix
- Avoid accumulation: Don't make multiple attempts without resetting
Advanced debugging strategies:
- Add strategic logging: Logging is your friend when bugs are unclear
- Switch AI models: Different models succeed where others fail; try Claude Sonnet 3.7, OpenAI models, or Gemini
- Clean implementation: Once you find the bug source, reset everything and give specific fix instructions on clean codebase
Critical principle:
"Don't make multiple attempts at bug fixes without resetting because the LLM just adds more layers of crap"
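The "strategic logging" tip can look like this minimal sketch (the checkout example is invented): log the inputs and each decision point around the suspected bug, then paste the log output back to the LLM instead of describing the behavior from memory.

```python
import logging

logging.basicConfig(level=logging.DEBUG,
                    format="%(levelname)s %(name)s: %(message)s")
log = logging.getLogger("checkout")

def apply_discount(total_cents, code):
    # Log inputs and every branch taken so a failing run tells its own story.
    log.debug("apply_discount total_cents=%r code=%r", total_cents, code)
    if code == "SAVE10":
        discounted = total_cents - total_cents // 10
        log.debug("matched SAVE10, discounted=%r", discounted)
        return discounted
    log.debug("unknown code %r, no discount applied", code)
    return total_cents
```

A log trail like this gives the model the same precise, technical evidence that makes pasted error messages so effective.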
How do you make AI coding agents dramatically more effective?
Write comprehensive instructions for your AI coding toolsβthis can transform their effectiveness:
Where to put instructions:
- Cursor: cursor rules files
- Windsurf: windsurf rules files
- Claude: markdown instruction files
- Each tool has slightly different naming conventions
Scale of impact:
- Some founders write hundreds of lines of instructions
- Makes AI agents "way way way more effective"
- Huge performance difference between default and customized agents
What to include:
- Coding standards and conventions
- Project-specific requirements
- Preferred frameworks and patterns
- Error handling approaches
- Testing strategies
The investment in writing detailed instructions pays off exponentially in code quality and consistency.
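A hypothetical excerpt of such an instruction file (e.g. a Cursor rules file; every line below is invented for illustration):

```
# Project conventions (read before generating any code)
- Stack: Ruby on Rails 7, PostgreSQL, Hotwire; do not introduce new JS frameworks.
- Follow existing patterns: service objects in app/services, thin controllers.
- Every new endpoint needs a request spec; run the suite before declaring done.
- Never modify committed files under db/migrate; create a new migration instead.
- Keep files small and focused; flag anything growing past a few hundred lines.
```

Even a short file like this removes a whole class of arbitrary decisions the agent would otherwise make on every prompt.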
What's the best way to handle API documentation with AI?
Pointing AI agents at online documentation is still unreliable, but there's a better approach:
Current documentation challenges:
- Patchy results: Online web documentation access is inconsistent
- MCP servers: Some suggest using MCP servers, but it seems like overkill
- Connectivity issues: Web-based docs aren't always accessible to AI
Recommended solution:
- Download documentation locally: Get all API docs for your tech stack
- Create local subdirectory: Place docs in your working folder structure
- AI can access locally: LLM reads docs directly from your project
- Add instruction: Tell AI "go and read the docs before you implement this thing"
Results:
- Much more accurate implementations
- AI has complete context of API capabilities
- No dependency on web connectivity
- Faster reference and implementation
How can you use AI as a coding teacher?
Transform AI from just a code generator into a personal coding instructor:
Learning workflow:
- Implement something: Use AI to build a feature or solve a problem
- Request explanations: Ask AI to walk through the implementation line by line
- Deep understanding: Get explanations of why specific approaches were chosen
- Technology exploration: Learn new frameworks and languages through guided explanation
Benefits over traditional learning:
- Personalized instruction: Explains your specific code, not generic examples
- Interactive learning: Ask follow-up questions about unclear concepts
- Better than Stack Overflow: More targeted than scrolling through forums
- Immediate feedback: Get explanations right when you need them
This approach is especially valuable for developers learning new technologies or those less familiar with specific coding languages.
Key Insights
- High-level integration tests are more valuable than unit tests when working with AI because they catch unintended changes across your codebase
- AI excels at non-coding development tasks like DevOps, design, and asset creation; treat it as your full development team
- Copy-pasting error messages directly to AI is often the fastest path to bug resolution, no explanation required
- Different AI models have different strengths; switching models can solve bugs that seemed impossible
- Writing detailed instruction files for your AI tools creates exponentially better results than using default settings
- Local documentation access is more reliable than having AI fetch docs from the web
- AI makes an excellent personalized coding teacher that can explain implementations line by line
References
AI Models & Tools:
- Claude Sonnet 3.7 - Used for DNS configuration, hosting setup, and code implementation
- ChatGPT - Used for favicon image creation
- Gemini - Best for whole codebase indexing and implementation planning
- GPT-4.1 - Recently tested model with mixed results
- Aqua - YC company providing voice-to-code transcription services
- Windsurf - AI coding tool mentioned for switching between platforms
- Claude Code - Alternative coding tool referenced
Technologies & Frameworks:
- Ruby on Rails - 20-year-old framework with excellent AI performance
- Rust - Programming language with less training data available
- Elixir - Programming language with limited online training examples
- Heroku - Cloud hosting platform configured via command line
- DNS Servers - Network infrastructure configured by AI
- MCP Server - Suggested method for accessing documentation
Development Concepts:
- Favicon - Browser icon created and resized through AI workflow
- JavaScript Console - Browser debugging tool for error identification
- Server Log Files - Source of error messages for debugging
- Stack Overflow - Traditional coding help resource being replaced by AI tutoring
How do you implement complex functionality that's beyond normal AI capability?
When facing features more complex than you'd normally trust AI to implement, use the standalone project approach:
Step-by-step implementation strategy:
- Create isolated environment: Build the feature in a totally clean codebase
- Get reference implementation: Either build a small working version or download one from GitHub
- Point AI to the reference: Show the LLM the working implementation
- Guide reimplementation: Have AI follow the reference while adapting it to your larger codebase
Why this works:
- Removes complications from your existing project
- AI can focus on the core functionality without distractions
- Reference implementations provide clear patterns to follow
- Much higher success rate than trying to implement complex features from scratch
"It actually works surprisingly well"
Why are small files and modular architecture crucial for AI coding?
Modular architecture benefits both human developers and AI systems:
Benefits of small, modular files:
- Clear boundaries: LLMs work better with defined API boundaries
- Consistent interfaces: External APIs remain stable while internals can change
- Reduced complexity: Easier to understand impact of changes
- Better testing: Tests can verify external interface compliance
Architectural shift prediction:
- Movement toward service-based architecture
- Clear API boundaries that AI can work within
- Away from massive monolithic repos with interdependencies
Problems with large codebases:
- Hard for both humans and LLMs to understand
- Unclear if changes in one place impact other parts
- Massive interdependencies create confusion
- Difficult to maintain and debug
Modern approach:
- Consistent external APIs
- Change internals freely as long as interface and tests pass
- Modular design enables confident refactoring
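A tiny illustration of the principle (names invented): the class below exposes a stable two-method external API, and the assertions exercise only that interface, so the internals (here a naive fixed window) can be swapped out freely as long as the tests stay green.

```python
class RateLimiter:
    """External API: RateLimiter(max_calls, period_s) and allow(now) -> bool.

    Internals are free to change; callers and tests depend only on the
    interface above.
    """

    def __init__(self, max_calls, period_s):
        self.max_calls = max_calls
        self.period_s = period_s
        self._window_start = float("-inf")
        self._count = 0

    def allow(self, now):
        # Naive fixed-window implementation. An LLM could later replace this
        # with a token bucket without touching any caller or any test.
        if now - self._window_start >= self.period_s:
            self._window_start = now
            self._count = 0
        if self._count < self.max_calls:
            self._count += 1
            return True
        return False
```

Because the tests pin down only the external behavior, "change internals freely as long as the interface and tests pass" becomes a mechanical check rather than a judgment call.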
Which technology stacks work best with current AI coding tools?
Choose mature frameworks with well-established conventions for best AI performance:
Why Ruby on Rails excels with AI:
- 20-year-old framework with mature conventions
- Consistent patterns: Most Rails codebases look very similar
- Clear conventions: Obvious where functionality should live
- "Rails way": Well-defined approaches for common tasks
- Abundant training data: Tons of high-quality, consistent Rails code online
Performance comparison:
- Rails: Tom was "blown away" by AI performance
- Less successful languages: Rust, Elixir have less online training data
- Training data correlation: More consistent examples = better AI results
Selection criteria:
- Choose frameworks with established conventions
- Look for languages with extensive online code examples
- Prioritize mature ecosystems over cutting-edge technologies
- Consider the volume and quality of available training data
Future outlook: Training data availability might change rapidly, potentially improving AI performance with newer languages.
How can visual inputs enhance your AI coding workflow?
Screenshots and visual communication unlock new possibilities for AI interaction:
Screenshot applications:
- Bug demonstration: Show UI implementation problems visually
- Design inspiration: Pull in designs from other sites you want to emulate
- Visual requirements: Communicate layout and design needs clearly
- Copy-paste functionality: Most modern coding agents support direct screenshot input
Voice input advantages:
- Speed improvement: Input at 140 words per minute (double typing speed)
- Error tolerance: AI handles grammar and punctuation mistakes well
- Natural communication: Talk through problems instead of typing
- Seamless integration: Works across different tools (Windsurf, Claude Code)
Practical example: Tom wrote his entire talk using Aqua's voice transcription, demonstrating the effectiveness of voice-to-AI workflows for complex content creation.
Tool spotlight:
- Aqua: YC company providing voice transcription for AI tools
- Works with multiple coding environments
- Enables natural language programming at speaking speed
When and how should you refactor AI-generated code?
Refactor frequently with confidence once you have proper testing infrastructure:
When to refactor:
- After code works: Get functionality working first
- Tests implemented: Ensure comprehensive test coverage exists
- Regular intervals: Don't let technical debt accumulate
How to refactor safely:
- Rely on tests: Tests catch any regressions during refactoring
- Ask AI for analysis: Have LLM identify repetitive or problematic code sections
- Target candidates: Focus on parts that seem good for refactoring
- Maintain modularity: Keep files small and focused
Professional developer principles:
- Avoid massive files: Don't let files grow to thousands of lines
- Stay modular: Keep components small and understandable
- Regular maintenance: Continuous improvement, not periodic overhauls
Benefits for AI workflow:
- Easier for both humans and LLMs to understand code structure
- Clearer boundaries make AI suggestions more accurate
- Better maintainability as projects grow in complexity
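A compact sketch of the pattern (the example is invented): repetitive loaders collapsed into one helper, where behavior-level tests that exercise only load_users and load_admins pass before and after such a change, which is exactly what makes an AI-driven refactor safe.

```python
# Before: two near-identical loaders, the kind of repetition you would ask
# the LLM to flag. After: one parameterized helper they both delegate to.

def _parse_people(lines, role):
    return [{"name": line.strip(), "role": role}
            for line in lines if line.strip()]

def load_users(lines):
    return _parse_people(lines, "user")

def load_admins(lines):
    return _parse_people(lines, "admin")
```

The public functions keep their signatures, so existing tests double as the regression net for the refactor.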
How do you stay current with rapidly evolving AI coding capabilities?
The AI coding landscape changes weekly, requiring continuous experimentation:
Experimentation approach:
- Try every new model release: Test each one across different scenarios
- Compare performance: Different models excel at different tasks
- Weekly evaluation: State-of-the-art changes that frequently
Current model specializations (as of video recording):
- Gemini: Best for whole codebase indexing and implementation planning
- Sonnet 3.7: Leading contender for actually implementing code changes
- GPT-4.1: Still developing (asked too many questions and made implementation errors)
Specific use case testing:
- Debugging capabilities: Which model solves bugs most effectively
- Long-term planning: Which handles complex project architecture
- Feature implementation: Which writes the best functional code
- Refactoring skills: Which improves existing code most effectively
Continuous learning mindset:
- Try models that didn't work well again next week
- Capabilities change rapidly
- What fails today might excel tomorrow
- Share discoveries with the community
"I'll try it again next week and I'm sure things will have changed again"
Key Insights
- Complex functionality is best tackled by building standalone reference implementations first, then having AI adapt them to your main codebase
- Modular architecture with clear API boundaries makes AI more effective and reduces the risk of unintended changes across your codebase
- Mature frameworks with established conventions (like Rails) perform significantly better with AI than newer languages with limited training data
- Visual inputs (screenshots) and voice commands can dramatically speed up communication with AI coding tools
- Frequent refactoring with comprehensive test coverage keeps AI-generated code maintainable and high-quality
- The AI coding landscape evolves weekly; continuous experimentation with new models is essential to stay current
- Different AI models excel at different tasks; the best approach is to test each model across various scenarios
References
Programming Frameworks:
- Ruby on Rails - 20-year-old framework with excellent AI performance due to established conventions
- Rust - Language with less AI success due to limited training data
- Elixir - Another language with fewer online examples for AI training
AI Models:
- Gemini - Best for whole codebase indexing and implementation planning
- Sonnet 3.7 - Leading contender for implementing code changes
- GPT-4.1 - Newer model still developing (had issues with too many questions and implementation errors)
Tools and Companies:
- Aqua - YC company providing voice transcription for AI coding tools at 140 WPM
- Windsurf - AI coding tool mentioned for voice integration
- Claude Code - AI coding tool used with voice commands
- GitHub - Platform for finding reference implementations
Architecture Concepts:
- Service-based architecture - Modular approach with clear API boundaries
- Monolithic repos - Large codebases with massive interdependencies (problematic for AI)
- API boundaries - Clear interfaces that help AI work within defined constraints
- Refactoring - Code improvement process that benefits from comprehensive testing