Transcend’s Kate Parker on putting data back into the hands of users in an AI-driven world

While we take a quick mid-season break, we're re-sharing some of our favorite episodes from previous seasons. This week, we're revisiting our conversation with Transcend President Kate Parker. Recent developments in artificial intelligence have sparked an outcry for control over personal data. While regulators, politicians, and the business community have been thinking about how to improve data privacy, there is still much more work to do. Kate Parker, Transcend’s President, will discuss...

April 29, 2025 · 40:04

Table of Contents

00:19-09:59
10:05-19:58
20:04-29:56
30:03-40:04

🎙️ Introduction

This episode from Spotlight On features a conversation between Transcend President Kate Parker and Accel partner Vas Natarajan, focusing on data privacy, AI governance, and how Transcend is helping companies manage data responsibly.

The episode introduction notes that while they take a quick mid-season break, they're sharing favorite episodes from previous seasons, highlighting the increasingly important topics of data privacy and governance in an AI-driven world.

"Welcome to Spotlight On, a podcast about how companies are built from the people doing the building. While we take a quick mid-season break, we wanted to share some of our favorite previous episodes with you. We loved this conversation between Transcend President Kate Parker and Accel partner Vas Natarajan and hope you enjoy it too."

Timestamp: [00:00-00:19]

🤖 AI: Top of Mind for Everyone

The conversation begins by establishing AI as a critical focus across multiple domains - from portfolio companies to buyers, regulators and politicians. Kate and Vas discuss how AI has the potential to reorganize society in profound ways, creating both opportunities and concerns.

"I think AI has the ability to reorganize society in ways that maybe we can't even predict right now," notes Vas Natarajan, highlighting the significant impact of this technology.

Kate reinforces this perspective, pointing out that many AI risks aren't entirely new: "Many of the risks that exist with AI have already existed in the business industry for many years, so things like processing personal data, having the right type of risk assessment."

She explains that companies are now dealing with traditional risk assessments while adding components related to generative AI: "It's interesting to watch companies sort of go through this classic version of their risk assessments and then this sort of additional component as it relates to the generative outputs, the inputs and outputs of the model."

Timestamp: [00:19-02:11]

🧠 Transcend's Origin Story

Vas asks Kate to share Transcend's fascinating origin story, which began with the personal experiences of founders Ben and Mike, two Harvard computer scientists who were trying to access their own personal data.

"It blew me away when I met Ben and Mike," Kate shares. "They were two computer scientists out of Harvard who went through a personal journey to try to figure out their own data. They were trying to hack sort of their efficiency, their time productivity, what music they listened to, how many classes they were going to—all of that stuff. They wanted all of that data together."

Kate explains how the founders' personal challenge coincided with the implementation of Europe's GDPR: "That was right at the time that Europe's GDPR came into effect, the data privacy regulation that said you have to be able to give your consumers a way of taking back their data."

The founders recognized two potential business approaches to data privacy: a surface-level approach focused on policies and banners, or a deeper infrastructure approach addressing data at the code layer. As computer scientists, they chose the latter path.

"Ben and Mike as two computer scientists said 'Well, that's the more interesting path,' particularly if you look down fields which they already were in terms of artificial intelligence and things like that, of just how valuable our personal data is."

Timestamp: [02:11-04:14]

🏆 Early Customer Success Stories

Kate discusses Transcend's initial customer base, highlighting how consumer-focused companies like Robinhood and Patreon were early adopters who recognized privacy as a core brand value.

"Robinhood, Patreon as an example. So folks who have just incredible user bases, really engaged consumers and wanting to be able to have that level of brand trust, whether it's the creators in the instance of Patreon or the actual users who are doing financial transactions with Robinhood."

Vas builds on this, noting how these companies viewed privacy not just as a compliance requirement but as a strategic advantage: "It was fascinating to watch them view privacy as a core tenet of their brand values. The product that they wanted to build and serve to customers needed to have privacy as a first pillar, as a first class citizen."

He explains the dual business benefits these companies recognized: "They viewed that both as a benefit from a CAC standpoint—so hey, it's just going to be easier for us to acquire users if people trust our brand—but also from a just lifetime value standpoint."

"People will be more likely to engage with our service, contribute to our service in the case of Robinhood, if they know that we are honoring their trade data. They'll be more willing to put money into their accounts and trade more frequently, and that's going to drive lifetime value."

Timestamp: [04:14-06:08]

🏢 Evolving to Serve Enterprise Customers

Kate explains how Transcend has evolved from serving consumer applications to now working with Fortune 100 companies, with a focus on enabling strict compliance at the code layer.

"When we fast forward to today, we now serve not only the consumer apps, we serve Fortune 100 companies. And what we're seeing in terms of value proposition is really number one, strictest compliance. Companies, particularly our largest enterprise companies, are just saying we need to make sure that we have these dials turned as tightly as we can."

She emphasizes that policy-level compliance isn't enough for sophisticated organizations: "You can only go so far when you're using policies and sort of written statements. You've got to get in at the code layer and just tighten those dials in terms of how you're handling compliance."

Timestamp: [06:08-06:39]

🔍 The "Confetti Gun" Problem

Vas asks Kate to explain how companies were managing privacy requirements before Transcend. Kate uses a vivid metaphor to describe the challenge of handling personal data across multiple systems.

"In the early days it was a lot of web forms and shoulder tapping. They would basically set up a system within their company to say when a person comes and requests their data, I'm going to need you John, you Sally, you Sue, whoever else to handle all of these things."

Kate elaborates on the manual process: "I'm going to need you to pull the data from your systems if you're the marketing team. I'm going to need you to go in and search them up and find them. We're going to put all of that data back together."

"We like saying that personal data within most companies goes off like a confetti gun. It just goes into every SaaS system, every data warehouse, and then you got to pull all that confetti back together and hand it back to the user."

Vas reinforces this metaphor, noting how user data is scattered across numerous tools and systems within organizations: "I sign up at Robinhood, their email service provider has my data, their core production databases have my data, their re-engagement tools have my data... Vas is known across many different SaaS tools inside of Robinhood, and to govern that... that's the confetti gun that needs to be controlled."

Timestamp: [06:39-07:56]

🛠️ Transcend's Technical Solution

Kate explains how Transcend provides a comprehensive technical solution to the data governance challenge, replacing manual processes with automated, code-level infrastructure.

"Now with Transcend and infrastructure components being able to do that at the code layer, mapping all your data systems, knowing where all your content is, being able to pull all of that back into a single unified view and then actually being able to govern that."

She emphasizes Transcend's core value proposition: "Our central mission has always been about making it easy for companies to execute these tasks, these data processing tasks."

Kate then pivots to how this approach applies to artificial intelligence governance: "As we look to things like artificial intelligence, a lot of our customers are pulling us in because many of the same things hold true at the governance level."

She points out that despite ongoing regulatory development, AI is already subject to existing regulations: "Don't get me wrong, there's a lot that needs to be figured out on the regulatory field of where some of this will go, but AI is already regulated. We've got 13 US privacy comprehensive pieces of legislation that govern where you can put sensitive data. We've got Europe's GDPR which already focuses on looking at automated decision-making, having the right risk assessment set up, processing the data components."

Timestamp: [07:56-09:24]

🌐 Global Privacy Orchestration

Vas summarizes the value of Transcend's approach, highlighting how it enables companies to handle data governance across different global jurisdictions with different regulatory requirements.

"This is what I love about Transcend. You have companies where the confetti gun has gone off, they need to almost put the confetti gun back into the cannon in some ways, and so Transcend gives them that single choke point such that we know where the data is."

He emphasizes the global applicability of Transcend's solution: "Regardless of where your end customer is, if they're in California, if they're in Europe, if they're in Japan, if they're in Brazil, if they're in India, any of these jurisdictions where there are going to be unique bespoke pieces of regulation. Hey, I can orchestrate that data, I can govern that data based on where my user is."

Timestamp: [09:24-09:59]

💎 Key Insights

  • AI has the potential to "reorganize society in ways that maybe we can't even predict right now," making governance and regulation crucial topics
  • Many AI risks aren't new - they build upon existing data privacy and processing challenges that businesses have faced for years
  • Transcend began when founders Ben and Mike, Harvard computer scientists, struggled to access their own personal data
  • Early adopters like Robinhood and Patreon saw privacy as a core brand value that improved both customer acquisition and lifetime value
  • Personal data within companies "goes off like a confetti gun" into various systems, creating major governance challenges
  • Before automated solutions, companies relied on manual "shoulder tapping" processes to handle data privacy requests
  • Transcend provides code-level infrastructure that maps data systems and enables unified governance
  • Despite ongoing regulatory development, AI is already governed by existing data privacy regulations like GDPR
  • Transcend enables global privacy orchestration across different jurisdictions, each with unique regulatory requirements

Timestamp: [00:00-09:59]

📚 References

Companies:

  • Transcend - Privacy infrastructure company founded by Ben and Mike, now serving Fortune 100 companies
  • Robinhood - Early Transcend customer, financial services company that saw privacy as a core brand value
  • Patreon - Early Transcend customer, creator platform that prioritized privacy for brand trust

People:

  • Kate Parker - President of Transcend, featured guest in the episode
  • Vas Natarajan - Accel partner, host of the episode
  • Ben and Mike - Harvard computer scientists who founded Transcend after struggling to access their own data

Regulations:

  • GDPR - European data privacy regulation that influenced Transcend's founding
  • CCPA - California Consumer Privacy Act, mentioned as part of US privacy legislation
  • AI Act - Emerging European regulation mentioned in the context of AI governance

Concepts:

  • "Confetti Gun" - Kate's metaphor for how personal data scatters across numerous systems within companies
  • Code-layer Governance - Transcend's approach to managing data privacy at the infrastructure level rather than just with policies
  • Automated Decision-making - A specific focus of existing regulations like GDPR that already apply to AI systems

Timestamp: [00:19-09:59]

🔄 Single Point Control for Data Governance

Vas continues his thought about Transcend's value proposition, explaining how it provides not just data tracking but also policy implementation capabilities.

"Now I have Transcend that gives me both that single choke point but also a single on-ramp to apply policy to Vas's data," he explains, highlighting the dual benefits of the platform.

Vas then makes a connection to the broader theme of AI regulation, suggesting that current data governance frameworks are laying the groundwork for future AI governance: "The whole point of the season is to talk about AI in some ways, how AI will be regulated... it's almost going to be a natural evolution of how data has been regulated over the last 5 to 10 years."

He articulates a significant insight about regulatory progression: "GDPR, CCPA and all of these privacy regulations acts were just the on-ramp to how AI is ultimately going to be governed and we're starting to see that take shape now."

Timestamp: [10:05-10:46]

🔄 Proactive AI Governance

Kate describes how existing customers are approaching Transcend for help with their generative AI initiatives, demonstrating how data governance demands are evolving in the post-ChatGPT era.

"For many of our customers, they came to us very proactively this year and said, 'We have new plans for generative AI in particular. You are our trusted data governance infrastructure. We need your help actually managing the data flowing into the LLM and flowing out of the LLM.'"

She provides concrete examples of how companies are implementing AI internally: "Chief information officers, a lot of folks are launching productivity tools internally for their employee base so that they can access information more quickly, whether that's your customer service agent looking to quickly pull down a call script or take some type of action."

Kate emphasizes the importance of controlling both inputs and outputs: "When you think about the LLM, you need to actually be very careful of what information is going into that LLM and then what information that internal employee can actually take out of it."

She notes how this aligns with Transcend's core mission: "In many ways for us, that's just the core infrastructure problem that we've always been focused on: what is the data, where is it going, who has access to it, what should sort of be done with it, and how should it flow throughout the organization."
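The control point Kate describes, governing what flows into an LLM and what an employee can take out of it, can be pictured as a filter on both sides of the model call. The sketch below is purely illustrative and is not Transcend's actual approach: the regex patterns, placeholder format, and `redact` helper are all assumptions for demonstration.

```python
import re

# Illustrative sketch only: scrub obvious PII from text before it reaches an
# LLM, and apply the same scrub to the model's output on the way back.
# Real governance infrastructure is far more sophisticated; this just shows
# the shape of an input/output control point.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched PII span with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

prompt = "Contact jane.doe@example.com or 555-867-5309 about the refund."
print(redact(prompt))
# -> Contact [REDACTED EMAIL] or [REDACTED PHONE] about the refund.
```

In a real deployment the same gate would also enforce access rules, i.e. who may see which fields, before anything is returned to the internal user.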

Timestamp: [10:46-12:04]

🖼️ The Personal Data Dilemma

Vas shares a personal concern about how his own data might be used by AI systems, highlighting the real-world implications of AI data governance.

"When I first put my photo into Dall-E, I was wondering this—if someone comes in behind me and prompts Dall-E or Midjourney for 'okay looking Indian dude,' I don't want my photo to come up."

He elaborates on the broader concern: "We're submitting a lot, whether we know it or not. We're submitting a lot of personal data into these systems, and those systems are reading that data and they're creating probabilistic outputs that might approximate the data that we're submitting."

Vas emphasizes that this concern extends beyond companies to individual users: "That's always the ground swell that is really going to activate this category for us—do customers really care about this? And I think that's absolutely yes."

Kate affirms this perspective by sharing an example of a major AI company's approach: "We serve one of arguably the largest AI companies in the world, and for them they wanted to be sure that their users understood all of the general and existing regulation on the books in terms of deleting their data, accessing their data."

She notes that AI companies feel particular pressure around data governance: "I think that particularly for AI companies is a very real thing. We talk to companies all the time that say, 'We have a great consumer-facing product, we plan on using AI, we need to make sure that all of the regulatory screws as it relates to them feeling ownership over their data and the ability to reclaim it... is set up, ready to go.'"

Timestamp: [12:04-14:02]

🚦 Enterprise AI Adoption Challenges

Vas asks Kate about the maturity curve of AI implementation in enterprise contexts and what needs to happen for the market to develop. Kate provides insights focused specifically on generative AI adoption challenges.

"I think it's very specific to generative AI. I'm going to kind of put classic AI to the side because I think there are a lot of enterprise companies that have been doing classic AI for years," Kate explains, distinguishing between established AI practices and newer generative approaches.

She describes the current enterprise landscape: "On the generative front, what we see and who we talk to in the Fortune 100, there's a ton of excitement to do something. They have product plans, whether it's chatbots that look at product recommendations and how they can speed that up, sort of reallocating the efficiency of their customer success team."

Despite this excitement, Kate identifies a critical barrier: "They are incredibly nervous about the lack of brakes and guardrails, and that is causing them to just take a beat to figure out sort of how are we going to handle some of those thorny issues."

She breaks down the specific concerns: "If you just go down to the granular level, it's how do we make sure we don't put the wrong data into the model, and how do we make sure, particularly if it's an end-user product, that that model doesn't spit out an output that we don't want it to."

Timestamp: [14:02-15:59]

🛡️ Trust and Safety Concerns

Kate delves deeper into the specific concerns enterprises have about implementing generative AI, highlighting both trust and safety issues as well as data privacy considerations.

"It ranges from trust and safety concerns—how do we make sure that the output is brand appropriate, is something that has the right ethics, it sort of evolves to our brand morals and our brand values—so all the way on the trust and safety side."

She then addresses the privacy dimension: "But then obviously the privacy concerns, IP concerns, just sort of these very standard concerns. How do we make sure that our chatbot doesn't leak sensitive data out to an end user? How do we make sure our chatbot doesn't put out information that is actually just at its standard core not something that it should be doing?"

Vas notes a significant shift in how companies are approaching these issues: "These were concepts that a lot of our companies were layering on after the fact. You take a Robinhood or GoFundMe or Patreon, it's like, 'Let's go build a big service, let's get a bunch of users, and then we'll figure some of this stuff out after.'"

He contrasts this with the current approach to AI: "But now as they're thinking about AI, it has to be foundational to them. It's almost like compute—we can't even start to build these services without making sure that we have our privacy posture in place."

Kate agrees emphatically: "I think that has been such a transformational moment for the industry."

Timestamp: [15:59-17:34]

🚀 Privacy-First Product Development

Kate describes a fundamental shift in how startups are approaching privacy, particularly those building AI products, moving from an afterthought to a foundational component.

"If we just look back 3 years ago, there were very few startups who were coming to us below the regulation thresholds and saying 'I really want to get ahead of privacy.' That was a small cohort... but by and large folks were trying to figure it out after the fact, after they got big enough, after they started to need to be regulated."

She contrasts this with the current approach of AI companies: "For AI companies, it has been completely different. We are meeting with folks daily who say, 'We are just getting going, this is our mission on AI governance, we believe that we need to have this stuff handled.'"

Kate explains the dual business drivers behind this shift: "On two fronts, if they're consumer, they're not going to get the trust of consumers using their products. And if they're business-focused, they're not going to be able to close those contracts because the businesses that they're looking to actually interact with and close contracts with also want to know where their data is going."

She notes how this creates a stronger business case for privacy: "It's got this really interesting business tension to it... businesses have a little bit more skin in the game right now, which has just been really interesting to see."

Timestamp: [17:34-18:53]

💼 Go-to-Market Advice for AI Startups

Vas asks Kate for practical advice for AI-native founders trying to gain traction with enterprises. Kate emphasizes the importance of demonstrating clear value beyond the hype.

"I think the biggest thing is being able to show exactly what the value is. There is a lot of hype in the AI industry, there's a lot of excitement about different use cases."

Drawing from Transcend's experience, she stresses the importance of proven capabilities: "Even just sort of drawing on our own journey, being able to demonstrate very clearly not only what you're saying but what you can actually do is incredibly important."

Kate explains why this is particularly crucial in the current market environment: "People are trying to kind of cut the wheat from the chaff very quickly, and so being able to kind of pull that apart I think has been super interesting."

Timestamp: [18:53-19:58]

💎 Key Insights

  • Existing data privacy regulations like GDPR and CCPA are functioning as "the on-ramp" to how AI will ultimately be governed
  • Companies are proactively seeking help with generative AI governance, specifically controlling both data inputs to LLMs and outputs to users
  • Personal data submitted to AI systems creates downstream privacy concerns as models can produce outputs that approximate or reveal that data
  • Fortune 100 companies show enthusiasm for generative AI applications but are hesitant due to "the lack of brakes and guardrails"
  • Enterprise concerns about AI include both trust and safety (ensuring brand-appropriate outputs) and data privacy (preventing sensitive data leakage)
  • There's been a "transformational moment" in how privacy is approached—from an afterthought to a foundational requirement, especially for AI products
  • AI startups face a dual imperative: building consumer trust and satisfying business customers' data governance requirements
  • In a market filled with AI hype, founders must clearly demonstrate actual value and capabilities to stand out

Timestamp: [10:05-19:58]

📚 References

Companies/Products:

  • Transcend - Data governance infrastructure company discussed throughout the segment
  • Dall-E - AI image generation tool mentioned by Vas in his personal anecdote
  • Midjourney - AI image generation tool mentioned alongside Dall-E
  • Robinhood - Financial services company mentioned as an example of adding governance after growth
  • GoFundMe - Crowdfunding platform mentioned as another example of retrospective governance
  • Patreon - Creator platform mentioned alongside other examples of companies that added governance later

Technologies:

  • LLM (Large Language Model) - AI system type that companies are implementing with governance concerns
  • Generative AI - Category of AI that Kate distinguishes from "classic AI" in enterprise adoption
  • Chatbots - Specific application of AI that enterprises are exploring for customer service

Regulations:

  • GDPR - European data privacy regulation mentioned as precursor to AI governance
  • CCPA - California Consumer Privacy Act mentioned alongside GDPR

Concepts:

  • "Single choke point" - Vas's description of Transcend's control mechanism for data
  • "Brakes and guardrails" - Kate's metaphor for the controls enterprises want before implementing AI
  • Trust and Safety - Category of concerns related to brand-appropriate AI outputs
  • "Cut the wheat from the chaff" - Expression Kate uses to describe how businesses evaluate AI vendors

Timestamp: [10:05-19:58]

🔍 The AI Risk Landscape

Kate turns the tables and asks Vas about his perspective on risk assessment for AI startups targeting enterprise customers, based on his broad exposure to the startup ecosystem.

"I obviously talk to a lot of different startups and sort of get a pretty big view of how folks are using AI. I'd be curious your take on where you think risk is flowing into that and how they're actually thinking about that in terms of going after enterprise customers. Do you think they know that it exists yet, or do you think they're still on that learning curve?"

Vas responds by sharing his extensive exposure to AI startups: "We probably meet, I don't know, a couple thousand startups every year. The vast majority of them today have some AI native component."

He identifies a fundamental tension in how AI companies are positioning themselves: "There's an interesting tension for how AI companies are pitching themselves right now. One is they're trying to sell efficiency. They're saying, 'Hey we can do so much more work per unit of input than you could have done before.'"

Vas highlights a key implication: "A lot of what that implies is they're actually taking humans out of the loop. You might have a customer success team or a customer support team of 50, and hey, by being AI enabled or implementing our product, maybe you can get the same output with only a team of 10 or 15."

Timestamp: [20:04-21:08]

💰 Reimagining AI Business Models

Vas explores how AI is forcing companies to rethink fundamental business model assumptions, particularly around pricing strategies.

"Products in the space historically have been priced on a per seat basis, and so you have all of these companies that grew up in the SaaS era of wanting to sell more and more seats, but now you have a product that is arguably taking seats out of the equation."

He describes the resulting business model challenge: "You have a lot of companies that are just having to reimagine pricing and packaging and what that ROI story is, how they connect to some value access where you can get in, you can get your foot in the door, but then you can hopefully extract more economics from these customers over time."

When Kate asks for advice to founders navigating this shift, Vas emphasizes the importance of adaptability: "I think the thing that we can do as investors is just not be prescriptive because I think we're in a new world and these technologies are implying entirely new business models, they're implying entirely new ways of company building."

"I think we're going to have to go back to first principles and really think from the ground up. Okay, how do we build and scale these companies? How do we go to market? How do we package and price? How do we hire? How are we going to fundraise behind these companies going forward? All of that is going to be turned on its head."

Timestamp: [21:08-23:56]

🔒 First-Class Principles

Vas concludes his response by emphasizing the importance of prioritizing security and governance in AI-native companies.

"I think the thing that I'm really preaching to the companies I work with is security, privacy, governance, trust, safety—these have to be first class principles, not just from a marketing standpoint, but really how we build and scale these products."

He explains why this shift is critical: "I think this concept of personal data security has jumped the shark. It's become a consumer and end user expectation that every company is going to have to build against."

Kate enthusiastically agrees with this assessment, affirming the elevation of privacy from a checkbox to a core requirement.

Timestamp: [23:56-24:26]

📜 Current State of AI Regulation

Kate provides a comprehensive overview of the current regulatory landscape for AI, emphasizing that significant regulation already exists despite the perception of a regulatory vacuum.

"AI is regulated. There are existing regulations on the books. In the US, we have 13 state comprehensive privacy regulations. Obviously, Europe has been the high watermark for years with things like GDPR."

She highlights how existing privacy frameworks already address AI: "GDPR is interesting because it's also included for many years automated decision-making, which is AI, and so being able to handle the risk assessments and looking at that."

Kate also explains how US regulatory bodies are already active in this space: "You layer on top of that in the US things like the FTC and the FCC. They have already signaled quite clearly that they believe they have everything they need in order to make sure regulation around fraudulent practices, deceptive marketing, making sure that companies are really responsible for the AI outputs, that they're not misleading consumers."

She notes how regulators are actively using existing tools: "Regulators are already using the tools at their disposal. California, for example, has a privacy regulation act. They're already sort of turning the screws on that to make sure that they have things like opting out of automated decision-making, which would be AI-related specificity."

Timestamp: [24:26-25:58]

🌍 The European Approach to AI Regulation

Kate discusses how Europe is leading the way on developing a comprehensive, risk-based framework specifically for AI regulation.

"Most of us now from the governance perspective, we're watching Europe. Very similar to GDPR, Europe is signaling that they will likely do some type of broad AI act."

She explains the likely structure of this regulation: "It is very likely that it will be a risk-based regulation, which means that they will likely categorize businesses into risk frameworks and then have a sliding scale. The top of the scale, those businesses will be banned because they will be considered to be too detrimental for the risk, and then all the way down there will sort of be a sliding scale of regulation depending on the impact."

Kate advises companies to prepare by understanding where they fit in this framework: "You need to be thinking about where you fit into this risk framework. Are you ready to sort of say where your company falls in its usage of AI on that sort of scale of potential harm? Because that's sort of the signals that we're getting of where the world is moving toward."

Timestamp: [25:58-27:16]

🚨 From Paper Tiger to Regulatory Reality

Kate describes a significant shift in the enforcement of privacy regulations, moving from targeting only the largest tech companies to broader enforcement across industries.

"I will be the first to admit that privacy regulation for many years was a complete paper tiger. If you were not a big tech company, you were just hiding underneath sort of the standard of the industry and just saying, 'We don't really have everything well put together, but the only people the regulators are going after are the folks with the huge big tech logos.'"

She notes the historical focus on major tech companies: "Google, Facebook, those are the folks who are just getting hammered with fines, hundreds of millions of dollars of fines."

Kate then describes the recent shift: "What we are seeing now, and I think to your earlier point, I do think AI has sort of brought on another wave of this enthusiasm, and it's consumer-driven for sure. People are freaked out about how their data is going to be used. We are watching regulators really get much more specific with folks."

She provides a concrete example of expanding enforcement: "We had a healthcare system out of Chicago as an example that was fined $12 million for a pixel tracking issue where they were actually passing data to their adtech network even after folks had consented out."

Kate emphasizes the broader implications: "It's certainly becoming a far more aggressive regulatory environment. The fines are certainly increasing. The exposure across industries is increasing. It's no longer a problem of big tech."

Timestamp: [27:16-28:42]

🔍 Tracking Compliance Technology

Kate provides insight into one of Transcend's core products focused on ensuring that companies actually honor user consent choices regarding data sharing.

"One of the things that we do is on the tracking technology front, to make sure if you've consented into something—so you know if you go to a company and you say, 'I do not want you to share or sell my data downstream to the Facebooks of the world, the Googles of the world'—you actually expect that processing sort of flow to actually happen."

She explains the technical challenges companies face: "For many companies, that's challenging if they don't have the right type of infrastructure product."

Kate describes Transcend's solution: "We actually have an auditor program that goes in and just looks at websites to see if those pixels are actually firing."

She highlights the implications for regulatory compliance: "We already have a view of what a company will get fined, and regulators know this too. This stuff is now discoverable, this stuff is now obvious in many capacities."

Kate notes how this represents a fundamental shift in regulatory exposure: "Businesses are now starting to wake up to this notion that it's time to get this stuff handled. For many of those companies, they're realizing, 'Okay, we're not going to be able to just escape because we're not Google.' The regulators are like, 'Okay, we've signaled to Google and Facebook these huge fines. Now we're going to make it clear to the rest of the industry.'"
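The kind of audit Kate describes can be approximated with a simple check: capture the outbound requests a page makes (for example, via a headless browser) and flag any that hit known ad-tech hosts after the user has opted out. This is a minimal sketch of the idea, not Transcend's actual auditor; the tracker host list is a hypothetical placeholder.

```python
from urllib.parse import urlparse

# Hypothetical list of ad-tech hosts an auditor might watch for.
TRACKER_HOSTS = {"www.facebook.com", "www.google-analytics.com", "px.ads.example.net"}

def audit_pixel_requests(requests, user_opted_out):
    """Flag tracking-pixel requests that fired despite an opt-out.

    `requests` is a list of outbound URLs captured while loading the page;
    returns the URLs that should not have fired.
    """
    if not user_opted_out:
        return []  # nothing to flag: the user consented
    return [url for url in requests if urlparse(url).hostname in TRACKER_HOSTS]

captured = [
    "https://cdn.example.com/app.js",
    "https://www.facebook.com/tr?id=123&ev=PageView",  # pixel fired anyway
]
violations = audit_pixel_requests(captured, user_opted_out=True)
print(violations)  # the Facebook pixel request is flagged
```

This is why Kate can say the violations are "now discoverable": the evidence is visible in the page's own network traffic.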

Timestamp: [28:42-29:56]

💎 Key Insights

  • The majority of startups today have "some AI native component," showing how pervasive AI has become in the startup ecosystem
  • AI companies face a pricing paradox—they sell efficiency and reduce headcount, undermining traditional per-seat SaaS pricing models
  • AI is forcing a return to "first principles" for company building, requiring new approaches to pricing, go-to-market, hiring, and fundraising
  • Privacy, security, and governance have evolved from marketing points to "first class principles" that must be built into products from the ground up
  • Contrary to common perception, AI is already regulated through existing frameworks like GDPR and state privacy laws, which address automated decision-making
  • Europe is developing a risk-based AI regulatory framework that will categorize AI applications on a sliding scale from banned to lightly regulated
  • Privacy enforcement has expanded beyond "big tech" targets, with significant fines ($12M example) now hitting companies across various industries
  • The technical ability to monitor tracking pixel compliance means regulators can now easily identify and penalize consent violations

Timestamp: [20:04-29:56]

📚 References

Regulatory Bodies:

  • FTC (Federal Trade Commission) - U.S. regulatory body Kate mentions as having authority over AI-related deceptive practices
  • FCC (Federal Communications Commission) - U.S. regulatory body mentioned alongside FTC for AI oversight

Regulations:

  • GDPR - European privacy regulation cited as including provisions for automated decision-making
  • "13 state comprehensive privacy regulations" - Kate's reference to the existing U.S. state-level privacy laws
  • European AI Act (proposed) - Risk-based regulatory framework Kate describes as being developed

Technologies:

  • GitHub Copilot - AI coding assistant Vas mentions as delivering "25% or 30% efficiency gains" for engineering teams
  • Pixel tracking - Technology mentioned in the Chicago healthcare system fine example

Companies/Organizations:

  • Google - Mentioned as historically receiving large privacy fines
  • Facebook - Mentioned alongside Google as target of major privacy enforcement
  • Chicago healthcare system - Unnamed system fined $12 million for tracking pixel violations

Concepts:

  • "Paper tiger" - Kate's description of how privacy regulation was historically perceived
  • Risk-based regulation - Approach to AI governance Kate describes Europe pursuing
  • Automated decision-making - Technical term for certain AI applications already regulated under GDPR
  • "Jumped the shark" - Vas's phrase describing how personal data security has become mainstream expectation

Timestamp: [20:04-29:56]

🔄 Continuous Regulatory Evolution

Kate continues her thoughts on regulatory expansion, emphasizing that this is an ongoing process rather than a one-time compliance effort.

"They just keep tightening. They have made very clear the spirit of what they want, which is individuals in control of their data and data being handled appropriately within companies. And from every indication we're seeing in regulated markets, they just keep updating the regulations to get closer and closer to that end state."

She shares a frequent use case that exemplifies the challenge companies face: "When the governance team, whether that's the lawyer, the head of privacy, when they write a PRD and go to the engineering team and say, 'I think I need you to build something like this,' and those engineers say, 'I'm pretty sure that somebody has figured out a way to have this level of infrastructure,' and then they call us."

Kate explains why this scenario reveals a fundamental misunderstanding of data governance: "If a legal person is writing something in terms of a request that they have for the way that the product works, they are going to come back in 6 months and have another request because these laws keep coming, they keep changing."

"The amount of times we talk to folks who say, 'We tried to build this a year ago because we thought it was a discrete request that the legal team would then leave us alone.' And they realize that this is an ongoing screw tightening of data governance forever."

Timestamp: [30:03-31:30]

🌐 The Global Data Rights Expansion

Kate highlights the expanding scope of data privacy rights globally, emphasizing why companies need a comprehensive solution rather than point fixes.

"We live in such an increasingly complex data world. This is not slowing down. We believe that in the next few years, a third of the global population will have data rights. This is coming."

She explains how Transcend's solution provides relief to engineering teams: "For those developers and the folks who raise their hand and kind of get Transcend early, it's really magical because then they're just like, 'Oh, this is all the product that I was being asked to handle anyway, whether it's the deletion scripts or the firewalls or making sure the data is appropriately classified.'"

Kate notes that engineers typically prefer to focus on core product development: "Engineers would rather be building such more core things than having to worry about data governance."

Vas reinforces this value proposition, describing Transcend's service as: "Imagine all the world's regulatory frameworks condensed into one piece of computational logic, powered by Transcend, delivered by an API so that regardless of where your end customers are and what laws govern their data, you are absolutely up to date because you can pull our API."

He emphasizes the ongoing support aspect: "We've already updated that logic. You know how to govern that data right in that moment, and oh by the way, if anything changes, we got your back."
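Vas's description, all the world's regulatory frameworks behind one API call, reduces to a single lookup from the caller's point of view. The sketch below illustrates that interface shape only; the rule table and field names are hypothetical stand-ins, not Transcend's actual API.

```python
import json

# Stand-in for the provider's continuously updated regulatory logic.
# Regions, frameworks, and fields are illustrative, not a real API contract.
RULES = {
    "DE": {"framework": "GDPR", "deletion_required": True},
    "US-CA": {"framework": "CCPA/CPRA", "deletion_required": True},
    "US-TX": {"framework": "TDPSA", "deletion_required": True},
}

def governing_rules(region: str) -> dict:
    """Return the privacy rules that apply to a user in `region`."""
    return RULES.get(region, {"framework": "none", "deletion_required": False})

# The caller never encodes the laws itself; it just asks.
print(json.dumps(governing_rules("DE")))
```

The point of the design is that when a regulation changes, only the table behind the call changes; every integrated product is "absolutely up to date" without shipping code.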

Timestamp: [31:30-32:51]

🤖 AI Adoption Within Transcend

Vas pivots the conversation to ask how Transcend itself is utilizing AI in its operations. Kate explains their enthusiastic but thoughtful approach to integrating AI across their business.

"We are adopters. I think in terms of culture, I talked to other tech companies, and right now they're either in or out. We are in for sure. Culturally, we believe that this is going to have a huge impact on the way that we operate."

She describes their strategic approach: "We believe that there's efficiencies to be had. We believe that we can help create a better sort of culture of getting things done. We hunt for those monotonous tasks that nobody wants to do, and we figure out how AI can be applied to it."

Kate explains how AI adoption is company-wide: "It's all over the map, and every organization leader has been asked to think about how they incorporate AI into their work processes."

She then shares a specific example of an internal AI tool they've developed: "We've built a proprietary Privacy GPT, which basically summarizes all of the privacy regulations that exist in the world. These things are incredibly complex. I do not have a law degree, so I really need help kind of thinking this through."

Kate highlights how this tool democratizes expertise across the organization: "Arming every single person at the company with this tool means that our content strategist and our outbound business development team have the same access to information as our head of privacy as it relates to the regulations and understanding the field. And that's been a game changer for us just in terms of education."
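A tool like the Privacy GPT Kate describes typically retrieves the most relevant regulation summary before asking a language model to answer. The toy sketch below shows only that retrieval step with keyword overlap scoring; the regulation summaries are illustrative one-liners, and a real system would use embeddings and an LLM rather than word matching.

```python
# Toy sketch of the retrieval step behind a "Privacy GPT"-style assistant:
# pick the most relevant regulation summary for a question, then (in a real
# system) feed it to an LLM as context. Summaries here are illustrative.
DOCS = {
    "GDPR": "EU regulation: consent, right to erasure, automated decision-making opt-out.",
    "CCPA": "California law: right to know, delete, and opt out of sale of personal data.",
}

def retrieve(question: str) -> str:
    """Return the summary whose wording best overlaps the question."""
    words = set(question.lower().split())
    best = max(DOCS, key=lambda k: len(words & set(DOCS[k].lower().split())))
    return f"{best}: {DOCS[best]}"

print(retrieve("can a user opt out of the sale of personal data"))
```

Because the whole company queries the same index, a content strategist and the head of privacy get the same answer to the same question, which is the democratization Kate describes.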

Timestamp: [32:51-35:00]

🔎 AI as Information Access

Vas builds on Kate's example of Privacy GPT, observing how one of the most powerful applications of generative AI is improving information access and retrieval.

"I think one of the best applications of LLMs or Gen AI has been around just simple information recall. Why are people likening companies like OpenAI and Anthropic to Google? It's because so many of the use cases right now are just about search. It's information recall, it's how do I get the right piece of information at my fingertips in a very consumable way."

He contrasts this with traditional approaches to accessing complex information: "In the case of Privacy GPT, I'm not digging through all of this legalese and these 200 pages of regulatory frameworks that are being published out on these random websites. I'm just getting the right piece of information at the right time."

Vas highlights the practical benefits for different teams: "I can imagine all of our sales reps and our SDRs and our content strategists, they're getting so much leverage from that. They're eliminating all the cruft of having to wade through the density of data, and they're actually just getting to the answer."

He notes an additional benefit beyond just information retrieval: "Maybe even sometimes they're getting the content structured in a way that they can just rinse and repeat and almost inject it right into a piece of collateral for us."

Timestamp: [35:00-36:43]

📈 Future Impact of AI on Personal Data

Vas asks Kate what excites her most about AI's future impact. Her response focuses on how AI will elevate the importance of personal data governance in business operations.

"I think the thing that I'm most excited about is the impact that AI is going to have on the importance of personal data and the way that businesses are using personal data to fuel their operations, to service their customers, to make sure that they are providing a very valuable experience out in the world."

She connects this back to Transcend's mission: "When I go back to the mission of Transcend, it's all about making it easy for businesses to do that, to manage their personal data and to provide end users with great experiences, to meet the spirit and the letter of the regulations that exist."

Kate uses a wave metaphor to describe the current moment: "This is just sort of the beginning of the crest of the wave as we think about the impact of data, the importance of data, and the underpinnings of the way that this is going to be continued to be regulated."

She concludes with an insight about the balance between innovation and governance: "We're thrilled and excited to watch companies kind of take that turn and realize that they want to really get their stuff right because they recognize that there's only so much value that they can have in going fast if they don't have the right brakes. It's totally useless."

Timestamp: [36:43-38:27]

🚀 Building With Guardrails From Day One

Vas agrees with Kate's perspective, emphasizing the advantage of implementing data governance from the beginning rather than retrofitting it later.

"When I think about you guys and the position that you're in, we get a chance to go to customers and say, 'Put the brakes in place now, make this a foundational part of your application stack, and then run wild.'"

He contrasts this proactive approach with the alternative: "You're going to get to implement so many different cool things knowing in the background that you already have your safeguards in place. To layer that on after the fact, I think in some ways is going to stifle innovation."

Kate strongly agrees and extends the point to competitive advantage: "I think it's going to hurt your competitiveness in the market. I think we are moving towards an environment where if you don't know where your clean and consented non-personal data is, that's going to hurt your ability to compete against other companies that have that in place."

She summarizes the emerging paradigm with a simple but powerful statement: "We are moving towards that world. Data is just getting more and more important."

Timestamp: [38:27-39:15]

👋 Closing Remarks

Vas concludes the conversation by thanking Kate and summarizing how their discussion complements the season's focus on AI.

"We've been talking all season about the amazing force multiplier that AI is. We haven't yet really talked about until today just what it means to implement AI in a safe and well-governed manner, and you've shed such a great light on what that means from a brass tacks standpoint."

He acknowledges Transcend's role in enabling responsible AI adoption: "I think importantly, how Transcend is really going to power that future. So excited for you guys, the big opportunity ahead."

Vas also notes Transcend's business momentum: "We're already seeing it in the business. I mean, this past quarter and I think the next couple quarters are going to be some of the most exciting in our company's history."

Kate responds with appreciation: "Thank you, we appreciate it."

Timestamp: [39:15-40:04]

💎 Key Insights

  • Data governance is not a one-time project but "an ongoing screw tightening...forever" as regulations continuously evolve
  • Within a few years, "a third of the global population will have data rights," creating massive governance challenges for companies
  • Transcend essentially packages "all the world's regulatory frameworks condensed into one piece of computational logic" accessible via API
  • AI adoption varies widely among tech companies—some are "in" while others are "out," with Transcend fully embracing it across operations
  • Democratization of specialized knowledge is a powerful AI use case, allowing sales and marketing teams to access the same regulatory insights as privacy experts
  • Current AI applications primarily focus on information retrieval—getting "the right piece of information at the right time" without wading through complexity
  • Companies that proactively implement data governance will gain competitive advantage as "clean and consented non-personal data" becomes business-critical
  • The relationship between speed and safety in innovation is critical: "There's only so much value in going fast if you don't have the right brakes"

Timestamp: [30:03-40:04]

📚 References

Companies/Organizations:

  • Transcend - Data governance company discussed throughout, providing API-based privacy infrastructure
  • OpenAI - AI company mentioned by Vas when discussing information retrieval applications
  • Anthropic - AI company mentioned alongside OpenAI in the search/retrieval context
  • Google - Referenced as comparable to new AI companies in their information retrieval function

Technologies/Products:

  • Privacy GPT - Transcend's proprietary AI tool that summarizes global privacy regulations
  • LLMs (Large Language Models) - Referenced by Vas when discussing information recall applications
  • Gen AI - Term used alongside LLMs to describe generative artificial intelligence applications
  • Copilot - Developer tool Kate mentions Transcend uses internally

Concepts:

  • PRD (Product Requirements Document) - Document created by legal/privacy teams requesting governance features
  • "Screw tightening" - Kate's metaphor for the ongoing process of regulatory compliance
  • "Clean and consented non-personal data" - Term for properly governed data that will be competitively advantageous
  • "The crest of the wave" - Kate's metaphor for the current moment in data governance evolution
  • Middleware - Referenced as Transcend's technology that monitors inputs and outputs to their Privacy GPT

Roles:

  • Content strategist - Role mentioned as benefiting from Privacy GPT's knowledge democratization
  • SDRs (Sales Development Representatives) - Sales role mentioned as leveraging AI for information access
  • Head of privacy - Executive role referenced as traditionally having privileged access to regulatory knowledge

Timestamp: [30:03-40:04]