Can I borrow your life's work? It's a matter of national security!
Welcome to Cautious Optimism, a newsletter on tech, business, and power.
📈 Trending Up: Cautious optimism … tariff durability … Chinese spying … antitrust? … the German government … crowdfunding … Browser Use … partisan censorship … startup ARR …
Taiwanese concern about Chinese meddling: “Taiwan’s new president has formally labelled China a ‘foreign hostile force’ and ramped up national security measures in the face of growing threats and a string of spying cases,” The Guardian reports.
📉 Trending Down: The UK economy … Walmart-China relations … NATO relations … Meta’s belief in free speech … the price of agentic intelligence … going it alone …
Here are a few quotes: “There was universal revulsion against the Trump economic policies.” “Enter, now, the tax cheats.” “House Democrats are so infuriated with Schumer’s decision that some have begun encouraging her [AOC] to run against Schumer in a primary.”
What did we just say?
Yesterday CO wrote that breakout startups today are hitting vertical revenue curves faster than before, potentially improving their ability to self-fund and reducing the need for external capital. For VCs accustomed to teams building for years on their dime before generating material top line, it’s a change.
It’s also a real trend. TechCrunch reports the following concerning Y Combinator’s recent demo day:
Stories abound of AI startups quickly reaching tens of millions in revenue with headcount as low as 20 people. With less overhead, some startups may be inspired to take less venture capital funding, especially at the earliest stages.
Terrence Rohan, an investor with Otherwise Fund who’s been investing in Y Combinator since 2010, says he’s noticing a “vibe shift” from some founders in the current batch of the famed accelerator.
Yep. TechCrunch reports that some experienced founders argue against raising less money, out of concern that doing so cedes a possible edge to competitors. It’s a valid point, but if you are growing so fast you don’t have time to go out and raise, will you?
All About Copyright
OpenAI sent the White House Office of Science and Technology Policy its recommendations for the upcoming US AI Action Plan. Much of what OpenAI penned you’ve seen before — the paper echoes Sam’s notes on AI scaling that we’ve covered, for example.
But there are a few quotes that are worth reading in their entirety, even if you have already seen the headlines.
On China, the CCP, and DeepSeek:
In advancing democratic AI, America is competing with a CCP determined to become the global leader by 2030. That’s why the recent release of DeepSeek’s R1 model is so noteworthy—not because of its capabilities (R1’s reasoning capabilities, albeit impressive, are at best on par with several US models), but as a gauge of the state of this competition.
As with Huawei, there is significant risk in building on top of DeepSeek models in critical infrastructure and other high-risk use cases given the potential that DeepSeek could be compelled by the CCP to manipulate its models to cause harm. And because DeepSeek is simultaneously state-subsidized, state-controlled, and freely available, the cost to its users is their privacy and security, as DeepSeek faces requirements under Chinese law[.]
The framing here is national competition. It’s a familiar argument: China was ahead in AI, then behind, then caught up again. For two nations currently striving for global leadership, the AI race is a zero-fail situation. Hence OpenAI’s view that we need to set domestic copyright aside to prevent putative self-defeat (emphasis added):
Today, CCP-controlled China has a number of strategic advantages, including: Its ability to benefit from copyright arbitrage being created by democratic nations that do not clearly protect AI training by statute, like the US, or that reduce the amount of training data through an opt-out regime for copyright holders, like the EU. The PRC is unlikely to respect the IP regimes of any of such nations for the training of its AI systems, but already likely has access to all the same data, putting American AI labs at a comparative disadvantage while gaining little in the way of protections for the original IP creators
Therefore, OpenAI argues for (emphasis added):
A copyright strategy that promotes the freedom to learn: America’s robust, balanced intellectual property system has long been key to our global leadership on innovation. We propose a copyright strategy that would extend the system’s role into the Intelligence Age by protecting the rights and interests of content creators while also protecting America’s AI leadership and national security. The federal government can both secure Americans’ freedom to learn from AI, and avoid forfeiting our AI lead to the PRC by preserving American AI models’ ability to learn from copyrighted material.
What would that copyright strategy look like? Not what Europe and the UK are up to:
America has so many AI startups, attracts so much investment, and has made so many research breakthroughs largely because the fair use doctrine promotes AI development. In other markets, rigid copyright rules are repressing innovation and investment.
The European Union, for one, has created “text and data mining exceptions” with broadly applicable “opt-outs” for any rights holder—meaning access to important AI inputs is less predictable and likely to become more difficult as the EU’s regulations take shape. Unpredictable availability of inputs hinders AI innovation, particularly for smaller, newer entrants with limited budgets.
Therefore:
Applying the fair use doctrine to AI is not only a matter of American competitiveness—it’s a matter of national security.
Put another way: Let US AI companies use copyright-protected materials in a manner that many rights holders disagree with, or the United States will lose to China in the AI race. It’s a compelling argument from the perspective of technologists; who wouldn’t prefer not to pay for inputs?
It’s also potentially a self-defeating strategy. TollBit, a company that I’m tracking as closely as possible, wants to help rights holders license their content to AI companies. It recently dropped research noting that AI companies are scraping more, more frequently, and not sending traffic back to sources.
TollBit reported in its State of the Bots for Q4 2024 (released in February) that “AI chat bots on average drive referral traffic at a rate that is 96% lower than traditional Google search.” Ouch.
Sam’s argument is simple: Access to copyright-protected materials is so important that it’s a matter of national security, and therefore of viability. But if we have to collapse the economics of creating content for AI companies to survive today, who is going to create the material that the same AI companies need to eat tomorrow? I believe this is the technology version of eating your seed corn.
The OpenAI crew have a point. It will be a little bit harder to make the economics of AI work if training materials are properly compensated. I simply do not buy the argument that either we scupper copyright protections or we cede the future to China. I love a muscular approach to building; I do not appreciate tech companies saying that their IP is sacrosanct, but the IP of others is little more than a trifle.
I am tempted to change the name of this blog to The Super Official OpenAI Blog, replete with the AI company’s logo and design language, to underscore the point that tech companies do want some IP (their own) kept secure.
Sidebar: There’s a final wrinkle in the OpenAI document that we need to discuss. The company said that in the EU, “access to important AI inputs is less predictable and likely to become more difficult as the EU’s regulations take shape.”
The company also wrote later in its copyright-related notes that the US government should “shap[e] international policy discussions around copyright and AI, and working to prevent less innovative countries from imposing their legal regimes on American AI firms and slowing our rate of progress.”
That sounds like OpenAI demanding that the government demand that other nations not have their own copyright policies in place. Or, that the United States should determine EU copyright policy. Sure, Jan.
Friday Odds & Ends
UiPath’s transition from the RPA era to the AI era is still a struggle. The company’s stock got hammered this week after earnings. Transcript here, but part of the company’s growth problem stems from selling to the government. And macro uncertainty. The latter no company can avoid. The former suggests we should handicap growth expectations for gov-facing companies that once did well selling their digital wares to Uncle Sam.
CO is very bullish on stablecoins for practical reasons, and we’re hyped about them from a national-security perspective. In simple terms, stablecoins offer access to more winsome currencies for folks living in nations that lack something akin to the dollar or euro. And given USD stablecoin dominance and the fact that stablecoin backers love to buy US government debt, USD stables boost national fiscal stability. It’s all very neat.
And accelerating. MoonPay just bought Iron for nine figures. What does the company do? Offer “API-focused stablecoin infrastructure,” The Block reports. Yep. Expect more of this in 2025.
From the Adobe earnings call, a brutal question:
[I]f you took the AI book of business, it would be low single-digit percent of your total revenue for the year. I guess many are asking, when does this become more material? How long does it take? What is required? Maybe just help us walk through the AI journey and how that translates to revenue.
Techmeme highlighted a trio of FT reports on UK startup accelerators, EU startup accelerators, and European startup hubs.
Supabase is raising ~$100M at a ~$2B valuation. I’ve used Supabase thanks to Lovable, but you love to see open-source tech doing well, yeah?
And that’s all the time we have this week. More soon! — Alex