Welcome back to Cautious Optimism. Today is June 4th, 2024. I’m still at an event, so today’s missive won’t get to all the topics I wanted to cover. But! That just means more for later. Hugs, let’s have some fun! — Alex
Trending Up: Intel’s contra-Nvidia chatter … crypto lobbying … AI as geopolitics … AI + physical therapy … Pride — cheers to all our LGBTQ friends and family around the world … male birth control … Shutterstock’s 2023 revenue …
Trending Down: Tesla’s GPU future … tech jobs in particular … the labor market, more generally … states’ rights (to sue oil companies) … bigots watching Ms. Rachel … The Epoch Times’ epic time laundering money …
Startup of the Day: Storyblok is raising hella funds for its headless CMS, and Ingrid Lunden reports that the Austrian startup could be profitable next year.
AI folks are worried about AI folks
My view that regulation should chase AI progress instead of working to get ahead of it is not very popular. AI worriers want it the other way around, and zero-brakes accelerationists consider any government meddling to be tantamount to murder. (Not kidding on that last one.)
The good thing is that my view carries about as much weight as a straw. So, what do AI folks say about the matter? A new open letter from former and current OpenAI, DeepMind, and Anthropic denizens makes it clear that they worry the race to build ever more powerful AI models, ever more quickly, carries risks that are not being properly addressed.
The gist of their demands is that when it comes to risk, former and current employees should not be penalized for speaking up regarding potential dangers. Or, for a tonal sample:
[We demand that] the company will not enter into or enforce any agreement that prohibits “disparagement” or criticism of the company for risk-related concerns, nor retaliate for risk-related criticism by hindering any vested economic benefit
That sounds pretty reasonable to me.
Power, water, and who gets to calculate the future
I recently caught a short presentation on the expected future power needs of data centers in the AI era. The gist: the electricity required to scale our species-level compute will push aging grids to their limit, and we need a lot more net new power generation.
Building to meet that demand is not as simple as you might hope (good luck building high-capacity transmission lines at a quick tempo). And as developed economies move away from highly polluting power generation, finding new inputs is tricky. Hence, Microsoft and its peers are looking into creating more total juice on their own.
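To make the grid strain concrete, here’s a quick back-of-envelope sketch in Python. Every figure in it is an assumption I picked for illustration (the cluster size, per-GPU wattage, and overhead multiplier), not reported data:

```python
# Back-of-envelope: continuous grid draw of one large AI training cluster.
# All numbers below are illustrative assumptions, not sourced figures.

GPUS = 100_000        # hypothetical cluster size
WATTS_PER_GPU = 700   # roughly the rated draw of a modern data center GPU
OVERHEAD = 1.5        # assumed multiplier for CPUs, networking, and cooling

cluster_mw = GPUS * WATTS_PER_GPU * OVERHEAD / 1_000_000
print(f"Continuous draw: ~{cluster_mw:.0f} MW")  # ~105 MW, around the clock
```

Call it roughly 105 MW, running 24/7, for a single hypothetical site. Multiply that by every hyperscaler’s buildout plans and you can see why aging grids (and slow-to-build transmission) are the bottleneck.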
But power demands are not the only input to worry about when it comes to our AI-riven future. Water, too, is a big deal (more here if you need a refresher). This creates an interesting situation:
Q: Who will calculate the future of AI? The countries that have the best business climate and most companies pushing the frontier of AI models and chip efficiency?
A: No. The countries with the most low-carbon, high-consistency power generation, and excess water resources will do the math.
Kinda. That’s too cute by half, but as data center buildouts accelerate around the world and everyone scrambles not to be GPU-poor, the thing to watch is where we can sustainably plop data centers without wrecking groundwater reserves, running out of power, or hammering the atmosphere with more carbon emissions.
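Same exercise for water, using WUE (water usage effectiveness, liters of water per kWh), which is a real industry metric; the specific values here are again my own illustrative assumptions, building on the hypothetical cluster sketched above:

```python
# Back-of-envelope: annual cooling water for the same hypothetical cluster.
# WUE is a real metric (liters per kWh); the value below is an assumption.

CLUSTER_MW = 105        # continuous draw from the power sketch above
WUE_L_PER_KWH = 1.8     # assumed liters of water consumed per kWh
HOURS_PER_YEAR = 8_760

kwh_per_year = CLUSTER_MW * 1_000 * HOURS_PER_YEAR
liters_per_year = kwh_per_year * WUE_L_PER_KWH
print(f"~{liters_per_year / 1e9:.1f} billion liters per year")  # ~1.7B liters
```

Something like 1.7 billion liters a year for one site under these assumptions, which is why groundwater, and not just grid capacity, shapes where the data centers can go.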
Other tidbits, questions, and notes:
Why is everyone obsessed with AI agents? Is it because generative AI inside the enterprise is not catching on as hoped, meaning a different conveyance mechanism is needed?
More tomorrow, and lots, lots more when I get back to the Home Office!