A fantastic piece by Paul Kedrosky on how California's atmospheric rivers are changing, and the implications of those changes.
Lots of great stuff in it, but this was totally new to me:
A predator-prey model is a mathematical representation of the interactions between two species: a predator and its prey. It is often modeled as wolves, sheep, and grass. The most common is the Lotka-Volterra model, which consists of two differential equations. There are two equilibria: one with predator and prey in approximate balance, and one with both extinct.
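For reference, the two equations, with prey population x, predator population y, and positive rate constants α, β, γ, δ:

```latex
\begin{aligned}
\frac{dx}{dt} &= \alpha x - \beta x y \\
\frac{dy}{dt} &= \delta x y - \gamma y
\end{aligned}
```

Setting both derivatives to zero gives the two equilibria: total extinction at (0, 0), and coexistence at (γ/δ, α/β).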
Lattice made a splash this week with a pretty crazy announcement about adding AI workers to their platform. It was shambolic and they’ve since walked it back.
While Lattice did this poorly, I think that the question of “how do we integrate agents into the world?” is an interesting place to dig and experiment right now.
As an example, if you believe in agents, it seems pretty clear that agents are going to need to be able to pay for things subject to certain rules. So... what does it look like to give an agent a credit card?
I could just give it a credit card in my name, but that seems a little risky, and if things go wrong, who’s going to make that right?
But if I hire an agent created by another company to do work for my company, who gives them the credit card? Is it the creating company? Do they then invoice me after the fact?
It's possible that this looks exactly like how businesses give workers credit cards... but maybe not? It might be better to know that this is the card assigned to system X by entity Y. The entity that is ultimately on the hook for the spending, even if things go wrong, might want to be able to track that; the credit card issuer might also want to know which of its clients are giving agents these abilities, as the patterns of spending, real and fraudulent, might look different. This transparency probably helps the system overall.
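To make that concrete: card platforms like Stripe Issuing already support virtual cards with spend rules and free-form metadata, which is one plausible place to hang the "card assigned to system X by entity Y" labeling. A minimal sketch, assuming a Stripe Issuing account; the metadata convention is my own invention, not an existing standard:

```python
import stripe

stripe.api_key = "sk_test_..."  # your issuing-enabled secret key

# The entity ultimately on the hook for the spend is the cardholder.
cardholder = stripe.issuing.Cardholder.create(
    name="Entity Y, Inc.",
    type="company",
    billing={"address": {
        "line1": "123 Main St",
        "city": "San Francisco",
        "state": "CA",
        "postal_code": "94111",
        "country": "US",
    }},
)

# A virtual card scoped to one agent, with hard limits, tagged so both
# the issuer and the liable entity can see who is actually spending.
card = stripe.issuing.Card.create(
    cardholder=cardholder.id,
    currency="usd",
    type="virtual",
    spending_controls={
        "spending_limits": [{"amount": 50_000, "interval": "monthly"}],  # $500/mo
        "allowed_categories": ["computer_software_stores"],
    },
    metadata={
        "actor_type": "agent",        # hypothetical convention, not a Stripe field
        "agent_system": "system-x",   # which agent holds the card
        "liable_entity": "entity-y",  # who answers for the spend
    },
)
```

The spend rules cap the blast radius if things go wrong; the metadata gives the issuer the visibility into agent-held cards described above.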
Another example is account creation. There are probably types of services where we want non-human actors to be able to create an account. We could have them pretend to be human, but it might help to let them ask for agent access to a service. This is probably different from API access; in some cases it may help for them to see exactly what I see in the system.
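No such standard exists yet, so here is a purely hypothetical sketch of what an account record with a first-class "agent" actor type might carry. Every field name here is invented:

```python
from dataclasses import dataclass, field
from typing import Literal, Optional

# Hypothetical account model: "agent" is a first-class actor type,
# distinct from both a human login and a bare API key.
@dataclass
class Account:
    account_id: str
    actor_type: Literal["human", "agent", "service"]
    display_name: str
    # For agents: who they act on behalf of, and who built them.
    delegated_by: Optional[str] = None  # the person/company responsible
    operated_by: Optional[str] = None   # the company that created the agent
    # The agent sees the same UI surface a human would, but the service
    # knows who it is talking to and can rate-limit or audit accordingly.
    scopes: list[str] = field(default_factory=list)

expense_agent = Account(
    account_id="acct_123",
    actor_type="agent",
    display_name="Expense Agent",
    delegated_by="my-company.example",
    operated_by="agent-vendor.example",
    scopes=["read:dashboard", "create:purchase_requests"],
)
```

The point of the delegated_by / operated_by split is the same transparency argument as with the credit card: the service knows both who built the agent and who it is working for.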
Zooming out a bit, it seems to me that people get really upset when something pretends to be a human but it is actually AI. It also seems likely that we’re going to want to give agents more ability to act in the world and be productive. Yet the systems we have today that are essential for productive work assume human actors or computers acting on behalf of humans (programmatic access), but nothing in between. If we’re going to capture the value from agents, our systems are going to have to adapt.
When I was growing up, I never used first names with adults. The adults in my life were "Mr. Knabe", "Mrs. Stanley", or "Dr. Woods".
Adults reinforced this norm as well. When I met my parents' friends, they introduced themselves — in a friendly way — as "Mr. Brinker" rather than "Chris". The same with teachers — I had "Mrs. Bryson", not "Deborah".
My parents would’ve corrected me had I tried something else. I’m sure they probably did at some point, but I don’t remember it happening. It wasn’t notable; it was just how the world was. In lots of cases, I'm not even sure I knew the first names of my parents' friends until I graduated from college, when someone like Mr. Hehn would say, "please, call me Gunther" in a way that communicated I was now an adult too. This made me feel proud. The only exceptions I can think of here are my pastors (Jerry) and family (Aunt Julie and Uncle Bert).
As far as I can tell, this has completely gone out of fashion.
With my kids, 2 and 4, no adult uses their last name. My friends introduce themselves as Mr. Jon and Ms. Veronica, not Mr. and Mrs. Flash. I do this too — I introduce my friends to them as Mr. Graham and Mr. Ted, not Mr. Rowe and Mr. Strong. Even my daughter’s teacher is Ms. Heather, not Ms. Jones. I assume this will change as they enter the formal school system… but who knows!
This new behavior is so consistent that if an adult that I knew well introduced themselves to my child as Mr. Banna instead of Mr. Rami, it would seem overly formal, like wearing a tuxedo to an office.
This doesn’t bother me on a moral level, but I am intensely curious about it. When did it change? Why? I assume it’s related to the broader decline of formality in our culture, the way the hoodie has replaced the sports coat in menswear.
But what is driving this? Is it a desire to be youthful? Relatable? A way of communicating that adults and children are on the same level? As we’ve made this switch, what have we given up? Anything? Nothing? Does this change how children perceive adults? Does it change how children perceive themselves?
The most persuasive part of Leopold's argument is the relationship between compute and intelligence. This is sort of like the New England Patriots to me; I'm going to believe in it until it stops working. I see reasons why it might stop (running out of data, limits on available energy / computing power), but I don't know when or if we'll actually hit those constraints. People are pretty good at avoiding constraints!
I think he underrates the likelihood of a bottleneck somewhere that keeps us from getting to the AGI he imagines. Any individual bottleneck might be unlikely, but as long as one exists, the entire system is constrained.
Something I see Leopold do at points is assume a super AI, in his case an automated AI researcher that is 100x as competent as today's top AI researchers. Once this is assumed, any AI research problem becomes solvable, because you can spin up an unlimited number of these 100x researchers to get around it. And once any AI research problem is solvable, any problem is solvable.
What I think will ultimately happen is something like this:
An AI will exist that is superhuman on many dimensions. It will be able to do many things way better than humans and will be inarguably smarter than most humans. [0] Most of today's knowledge work will be offloaded to the AIs. This will be similar to the way that a lot of the production work of 1750 has been moved to machines in factories.
That AI will also have limitations. There will be some things that it can't do as well as humans, or where humans will have the ability to reliably trip it up, despite its intelligence. To extend the factory analogy, you'll still have humans pressing buttons for reasons other than just keeping the humans in control.
This will be really destabilizing. Society is going to change more between 2020 and 2040 than it did between 1950 and 2020.
Somewhat off topic: earlier this year, I read Meet You in Hell, which is the story of Henry Clay Frick and Andrew Carnegie. The dynamics of that era, with the railroad leading to a spike in demand for steel and steel leading to a spike in demand for coke, were very recognizable in today's AI race.
[0]: I think GPT-4 is already this! Do you know a single person who knows as much stuff about as many things as it does? I don't. And yet it still has limitations!