Hey people! Welcome back to Forests Over Trees, your weekly tech strategy newsletter. It’s time to zoom out, connect dots, and (try to) predict the future.
Memory is a Crappy Moat
Back in February, OpenAI announced a new feature for paying ChatGPT customers. If you so choose, you can let them use your past queries (and the things they learned about you) to make the answers to your future queries better. They call it “memory”, and it lets them personalize the model so it works just the way you like it.
So let’s say, for example, you’ve been asking ChatGPT to help summarize text for you. And let’s also say, hypothetically, that you’re very annoyingly particular about the format you want your summaries in. With this memory feature, ChatGPT doesn’t need you to constantly remind it to re-format to your annoying standards!
And while it does sound like a small win on convenience, analysts have been treating memory as a huge win strategically and financially.
Basically, they think it’s finally a meaningful moat for OpenAI (or any model company that does something similar).
I disagree. But before we get any further, let’s level-set on something… do these incredibly innovative companies really not have any moats?
Basically, yes.
Taking inventory of moats they’ve tried
Make the most carefully designed, well-architected models – companies tried to win on superior algorithms. It didn’t last. In the end, those algorithms got replicated, and the thing that really mattered, performance, leveled out.
Make the biggest, fastest, most expensive models – while first movers in the LLM space looked like Teslas racing against turtles, the performance gains from scale have diminishing returns. Plus, Meta’s open-source Llama models gave small competitors the chance to build on the shoulders of a giant.
Make the sexiest, most hip model company – companies tried to build sticky habits among users and strong loyalty among employees. But early-adopting users are trying every model they can get their hands on and picking whatever feels fastest. The war for talent is having a similar effect, with employees constantly tempted by dizzyingly high salaries.
So we get into a cycle where any model maker that is “behind” overpays for talent, boosts its model performance, starts beating competitors, and brings in users. Then it repeats with whoever is now “behind”.
When an industry has no moats, it’s basically a commodity business, where the products are interchangeable and price is all that matters. And that’s where many think the AI model companies are today.
So… why is everyone hot and bothered about memory?
The new moat – personalization
Let’s assume that you’re using Company A’s model with memory.
Within the first few days, you don’t notice much difference. But then, after a week, you realize that you’re not correcting or following up on queries to clarify them. Things are right the first time more often. After a month, you’ve stopped being impressed when your queries come back with perfect context, quoting your past conversations with the model and giving you better and better advice.
Out of nowhere, Company B puts out a brand-new model, releases 100 dimly lit demo videos that go viral (cough, OpenAI), and you jump to it immediately… but it doesn’t deliver answers the way you like. You’re spending time “coaching”. It feels like you’re starting from scratch. So you jump back to A and stay there. Personalization has basically raised the switching costs.
The folks who are bullish on personalization argue that this is just the tip of the iceberg. With this moat in place, they think users will be willing to pay more, more user-generated data will pour in to help train the models, etc.
It’s a reasonable argument, but I don’t buy it.
Bigger forces at work
If we’re only thinking about the models competing against each other, that’s too narrow a view. If we zoom out, there are three forces pushing back, making it harder for model companies to use personalization as a moat.
1/ User Privacy
People are getting more sensitive about who has their data, and why. Per KPMG, 86% of people in the US said data privacy is a growing concern. The largest messaging apps in the world (iMessage, Messenger, WhatsApp, and upstart Signal) all have end-to-end encryption. While early adopters might care less and sacrifice privacy for convenience (and better AI), most consumers likely won’t.
2/ Data Portability
There’s a wave of regulation pushing companies to let users own and take their data wherever they want. I’ve written before about GDPR (the EU privacy regulation), so let me borrow from that:
[GDPR] requires that social media companies give users tools to download and take their personal data with them, either for re-posting to other platforms or for archiving.
Tech firms that run social media sites are understandably concerned about that… because they get value from the users and the content they have on-site. The users, data, and content are moats.
They don’t want you to leave!
There will likely be some technical constraints to this (and US regulation always lags the EU’s), but to me it means any moat derived from lack of portability is short-lived.
3/ The Elephant(s) in the Room
Privacy concerns aside for a moment… If you really want maximum personalization, a memory of past queries is not enough. You’ll want the AI to see your grocery list, your browsing history, your social media posts, your email, etc.
To have that, you can’t just be a model maker. You need to be one of the tech behemoths that already has one or more of those data sources. Because if model performance hinges on “who knows you better”, then the model-only companies are bringing a knife to a gun fight.
Bonus Bullets
Quote of the Week
Anyone can replicate a product. There are lots of brilliant minds out there that know how to code, but there’s unique DNA to a brand. You cannot have a brand without people. That is the most important asset you will ever have.
— Whitney Wolfe Herd, Founder & Executive Chair at Bumble
Quick News Reactions
Ad-in – Netflix ad-supported tier subscribers are up from 23M to 40M since January, so they’re continuing to make the ads naysayers look silly. I wrote over a year ago about their strategic response to being between a rock and a hard place, and I’m still obsessed with the idea of them getting more seriously into sports, but this ads momentum is awesome.
Uber jazz hands – After announcing a loss in last week’s earnings, Uber is offering some shiny new things! This time, it’s Costco delivery via UberEats, and shuttle service to/from concerts, airports, and other major destinations. I’ll admit, I’m successfully distracted…
More chips – Korea wants in on the chip war, and they’re putting up $7.3B to make it happen. I wrote about all of Japan’s chip investments just a few weeks ago, and it makes perfect sense for Korea to be making similar moves. But Japan is putting up $23B… just saying.