Mutuals
Numinous Rationality

Perry Metzger

Radical technological change, why utilitarians are kidding themselves, and navigating the acute risk period through von Neumann probes to the stars

As I've said often in the past, utilitarianism is a hell of a drug. And it can get you to do incredibly horrible things while you're high on it. Utilitarianism: not even once. Just say no to utilitarianism.


All of these discussions are old discussions. Instead of being had among 200 people, they are being had in public among vast numbers.


Perry Metzger writes on Substack at Diminished Capacity and on Twitter.

  • [00:02:00] Jupiter Brains and Extropians

  • [00:07:00] Life Extension

  • [00:11:00] Ethical Systems

  • [00:19:00] Rejection of Utilitarianism

  • [00:26:00] Anarcho-Capitalism

  • [00:30:00] SVB Collapse

  • [00:40:00] Financial System Structuring

  • [00:54:00] AI

  • [01:00:00] The history of futurist discussion of AI

  • [01:03:00] AI Safety

  • [01:07:00] Fishing in the sea of AI Minds

  • [01:17:00] Mask on the Shoggoth

  • [01:22:00] Future Shock

  • [01:25:00] Competition in FDA Regulation

  • [01:38:00] Kardashev Scale Civilizations

  • [01:42:00] Estimates of X-Risk

  • [01:52:00] Cultural Effects of AI Developments

  • [01:56:00] Nanotechnology

  • [02:15:00] Sociology of the Nanotechnology field

  • [02:27:00] Engineering vs Theorizing

  • [02:43:00] Back to the future; accuracy of early extropian discussion

  • [02:52:00] Grabby Aliens

  • [02:56:00] Astrophysics


This transcript was machine generated and contains errors.

Perry Metzger: [00:00:00] to try to make our lives better, which is, I'm gonna quit my Firefox instance with 8,500 tabs in it.

Ben Goldhaber: Can't hurt. And I do think that the recording is working now. Cool. 

Divia Eden: Yeah. Okay. So, and do you wanna introduce Perry or should I?

Ben Goldhaber: No, Divia. I think why don't you do the introduction. 

Perry Metzger: Okay. 

Divia Eden: Perry Metzger is a computer scientist who's done academic research and who's also had a bunch of different programming jobs, including startups and consulting. He knows more about nanotech than anyone else I've talked to.

I originally met Perry through my husband, who met him through an ancap meetup (Anarcho-Capitalism). And from my perspective, he's one of those guys who's been around the futurist scene forever. He was an early member of the Cypherpunks mailing list, started the original cryptography list, and started the Extropians mailing list.

He claims no ownership or originality of any transhumanist ideas, except that he did coin the term Jupiter Brain. So, Perry, thanks so much for coming on our podcast.

Perry Metzger: Hello. Actually, no one hires me to write software [00:01:00] anymore. People hire me to be a horrifying management consultant, or what have you. Secretly I still write software here and there, and it horrifies people really, really badly. I have scarred a number of people working for me by having them deal with my software. But we don't talk about that mostly, you know, and I've gotten rid of most of the bodies over the years successfully.

So provided no one finds them, we should be okay. Yeah, I've done all sorts of stuff. I have no idea actually what I'm going to be when I grow up, but I've been told that if you don't actually figure that out by the time you're 60 or 70, you don't have to grow up.

So I might actually just have to opt for that. 

Divia Eden: Sounds good to me. Well, we have a lot of questions, but I was wondering if you could first start by telling our listeners what a Jupiter Brain is, in case they don't already know. 

Perry Metzger: Oh, okay. So I should set a little context, for those that [00:02:00] have no idea what Divia was mentioning when she said I started the Extropians mailing list — about 723,000 years ago.

You know, slightly before Homo sapiens showed up, I was hanging out with a friend of mine. And I'll cut the story really short by saying that we discovered this zine — this was back in the era when people would decide, you know, they'd get access to a photocopier and they'd start publishing a magazine.

There was this zine called Extropy: Vaccine for Future Shock, put out by a gentleman who was then Max O'Connor but is now Max More. And, you know, Harry and I were reading this thing and it looked like, oh, these guys came down on the same spaceship as us. And I got in touch with Max and I said, hey, I'd like to set up a mailing list to talk about these ideas.

And the ideas in the magazine were roughly centered on anarcho-capitalism, radical life extension, transhumanism, uploading, artificial intelligence, you know, nootropic drugs — the usual stuff that young people were interested in back then. So I got in touch with Max and I said, hey, Max, I want to set up a mailing list for subscribers of this thing.

And he said, well, what's that? And I said, well, you know, I set up this thing and then a bunch of people can have an online conversation. And he said, okay. And I sent out invites on something that was called Libernet back then, which was for crazy libertarians, and on the Cryonics mailing list, for crazy people who want to freeze their heads to save their asses.

And there's –

Divia Eden: – some crazy people on this call. Many of us are such people.

Perry Metzger: And I put out a call on a few Usenet newsgroups — for those that don't remember Usenet, it's okay. You know, you don't want to hear how grandpa used to have to walk uphill both ways to go to school anyway.

And the next thing you know, we have a few hundred people, many of whom became pretty famous, arguing about all of these topics. And at one point, we were talking about what the post-human future would look like.

And I did a back-of-the-envelope calculation and I said, well, you know, the largest practical computer I could imagine making would be something like the size of Jupiter. So if you had an AI the size of Jupiter, what's roughly the ratio between its cognition levels and the cognition levels of the average human being?

This looks pretty dismal, right? It's a lot worse than the ratio between human beings and ants. And I haven't done the calculation in a while, so, you know, I invite people to go off and figure it out. And so this is where the Jupiter brain meme came in.
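[Editor's note: for anyone taking Perry up on the invitation, here is a minimal back-of-the-envelope sketch. The assumption that cognition scales linearly with the mass available for computation is ours, not his, and the constants are rough textbook values.]

```python
# Editor-added sketch: assumes cognition scales linearly with usable
# computing mass -- a huge simplification, for scale only.
M_JUPITER_KG = 1.9e27    # mass of Jupiter, roughly
M_BRAIN_KG = 1.4         # mass of a human brain, roughly
jupiter_to_human = M_JUPITER_KG / M_BRAIN_KG   # ~1.4e27

# For comparison, the human-to-ant gap via rough neuron counts.
HUMAN_NEURONS = 8.6e10
ANT_NEURONS = 2.5e5
human_to_ant = HUMAN_NEURONS / ANT_NEURONS     # ~3.4e5

print(f"Jupiter brain : human ~ {jupiter_to_human:.1e}")
print(f"human : ant           ~ {human_to_ant:.1e}")
# The first ratio dwarfs the second, which is Perry's point about ants.
```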

My buddy [00:05:00] Harry — you can figure out what he's like from his initial comment, which was: well, if your brain's the size of Jupiter, how large is your penis? But anyway, the Jupiter brain meme died very early on, because a bunch of people figured out that the cooling problems of having a computer that large would be bad. And the design that all the people who want to build a Kardashev Type II civilization these days are interested in is the so-called Matryoshka brain, where you take all of the matter in the solar system and turn it into concentric shells of, you know, photovoltaics and computronium, and have them communicate with each other and fly in swarms around the sun.

And of course completely blot out the sun, because why would you want to let any of that precious solar radiation escape when you could use it for computation? By the way, there was a tweet that I think I forwarded a while ago which said something to the effect of: I'm a conservative, I think that we should leave a hole [00:06:00] in the Matryoshka swarm so that sunlight can get to the Earth.

Right. And then there are, you know, the more radical types who think that we should just disassemble the Earth, because why would you leave all of that precious material mostly wasted, you know? But never mind that. So, yeah, I believe that I was the first person to coin the Jupiter brain meme, but it's a dead meme, so who cares?

When was this? This was the very early 1990s — early nineties. Yeah, I still have somewhere the original invite for people to join the Extropians mailing list. I can probably even find the date, thanks to the modern world — no.

I would've thought that just doing a Spotlight search on my desktop would find it pretty easily, but it didn't. But yeah, [00:07:00] I think it was like 33 years ago, maybe 32 years ago. Got it. You know, which should tell you that I'm an old fart.

You know, and my interest in life extension technology has only increased as my body has started disintegrating around me. But, you know, it's still here. So I'm not as horribly decrepit as I could be.

Ben Goldhaber: Are there any practices that you personally are interested in or doing around life extension and longevity? Or is it more focused on –

Perry Metzger: – future-oriented stuff, like cryonics? I follow a vegan diet, mostly to keep my cholesterol levels down and reduce my risk of things like colon cancer and what have you. And I try exercising, but that's gonna give me a few years at most, right? And if you want to live to be 20,000, or you want to upload and become, you know, a General Systems Vehicle or something like that, possibly –

Ben Goldhaber: – you need more than the vegan diet.

Perry Metzger: The vegan diet — it's not going to help that much.

I mean, maybe, you know, that's probably buying me a few years on average, which is worthwhile, right? You know, it would be so embarrassing to be the last person to die. Like, they've just about got the life extension tech and — no, so close! You just missed it by a few hours. Did you live long enough to live forever?

Divia Eden: You're going - 

Perry Metzger: Yes. You did not live long enough to live forever. So, you know — by the way, there is no such thing as living forever, right? The heat death of the universe is kind of inevitable. Absolutely. But you'd like to be able to live to late-stage capitalism, and as we know, late-stage capitalism will be when the last remnants of our civilization are hanging around black holes, tossing material in to rob energy from their angular momentum with the Penrose process in order to keep things going.

That will be late-stage capitalism, and that'll be in a few trillion years.

Ben Goldhaber: I suppose we'll have the markets for that. We'll have some prediction markets on when the last piece of matter is gonna go out.

Perry Metzger: Well, there's a wonderfully depressing Wikipedia page called Timeline of the Far Future that I highly recommend, and it includes things leading up to the heat death of the universe.

It goes past the heat death of the universe. Oh, nice. Because — assuming we don't get a Big Rip; if we get the Big Rip, then who knows what happens? There's this question in cosmology right now, because we've noticed that the expansion of the universe has been accelerating.

And the question is, will it continue accelerating? And if it continues accelerating, we might get to the point where, like, the individual atoms inside of us get torn apart. But if that isn't the case, you know, at some point, for example, even the largest black holes will decay from Hawking radiation.

And if you read the Timeline of the Far Future, it goes through all of that. But the [00:10:00] thing that I always mention when people talk to me about sustainability and long-term thinking is: well, you know, in only about 600 million years the Earth is not going to be able to sustain one of the two basic carbon fixation mechanisms of photosynthesis.

You know, and if you don't have a plan for that, you're not actually thinking long term, right? I mean, that's thinking short term. Long-term thinking is saying to yourself things like: well, we have to send the von Neumann probes across the universe to star-lift most of the hydrogen out of the stars so that we can conserve it for the far future.

Cuz right now it's not doing anyone any good, you know — it's just burning and sending out photons that dissipate, and you can't do much with that.
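[Editor's note: for the curious, the standard Hawking evaporation estimate behind "even the largest black holes will decay" is, for a black hole of mass M,]

```latex
t_{\mathrm{evap}} \;\approx\; \frac{5120\,\pi\,G^{2}M^{3}}{\hbar\,c^{4}}
```

[which works out to roughly 10^67 years for a solar-mass black hole and on the order of 10^100 years for the largest supermassive ones — the far end of the Wikipedia timeline Perry recommends.]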

Ben Goldhaber: I imagine, in your mind, most of the people who think of themselves as longtermists right now really don't deserve that title.

Perry Metzger: Well, it depends. It depends on which longtermists. I mean, some of the Bay Area rationalists are longtermists. Yeah. But I have problems with some of them for all sorts of reasons. As I've said often in the past, utilitarianism is a hell of a drug, and it can get you to do incredibly horrible things while you're high on it. You know — utilitarianism: not even once. Just say no to utilitarianism.

Divia Eden: Do you wanna tell us about what your ethical system is?

Perry Metzger: I guess so. So I have an internal conflict, right? By the way, when I've used the term moral nihilist, I'm not speaking of nihilism colloquially, right? I mean, like, as opposed to a moral realist? Yeah.

I don't go walking around wearing black, smoking clove cigarettes, and, you know, I'm not like one of the characters in The Big Lebowski saying she cut off her toe. And by the way –

Ben Goldhaber: Say what you will about utilitarianism — at least it's an ethos.

Perry Metzger: There you go.

But when we say moral nihilism versus moral realism, there's the question of whether morals have some sort of objective reality — whether there is such a thing as objective moral knowledge. So there are three — or, there are so many levels here; I'm starting to sound like the Spanish Inquisition sketch from Monty Python.

There are four levels to my interest here. So, taking a step back: there's the question of how does one argue with people? And what I mean by that is that a lot of the time when one discusses morality with people online, someone says, you know, that there is a moral obligation to pay a living wage — sure — to restaurant workers.

You know, from my point of view, at that point you have expressed that you're a moral realist of some sort. And how have I concluded that? [00:13:00] Well, you're saying that there are moral facts, and based on these moral facts, we are all obliged to behave in a particular way.

And because we are obliged to behave in this way, those who fail to behave in this way have done something wrong, and we must correct their behavior, perhaps with laws. And whenever I see an argument like that, I immediately assume that, whether or not any of the participants are moral realists, they have opened themselves to the question of moral realism and of what basis they have come to this on.

Because — and I'm probably gonna offend all the religious people in the audience, and I'm sure this will probably reduce your listenership among, you know, radical theists, old Latin Mass types, that sort of people.

But I'm gonna tell people anyway, cuz I'm just that offensive: I believe that religion has sort of hurt [00:14:00] people with respect to moral knowledge, because there are a lot of people for whom morality is something you assert. You know: God asserts that the following things are moral and the following things are not moral.

And questioning that is absolutely taboo. How dare you ask me on what basis I hold this belief that the following thing is a moral fact or not.

Divia Eden: So part of what you're saying — when you say there are several levels to it — is that, for the purpose of having discussions or arguments with other people, one of your go-tos is: if they use certain types of language, like saying that we're obligated to pay a living wage, you now have an opening to ask some questions about on what basis. And as a result of having asked a bunch of these questions, you also know that people treat it as taboo when you ask that.

Perry Metzger: Well, as soon as you say that, I mean, people get incredibly offended. But, you know, being [00:15:00] the sort of person I am, I ask it anyway, knowing full well that they'll be offended.

But there's something wrong with making moral arguments as though you are a moral realist and then refusing to give any basis on which you've come to the conclusion that you're arguing from. If you're going to say there is a moral obligation to do the following thing, you had damn well better have some rationale for it.

Divia Eden: You want them to be coherent about it. You want 'em to be able to answer.

Perry Metzger: Yeah — answer questions. But, you know, most people don't have a lot of knowledge about moral argumentation at all. You say to most people something like, well, have you read the Euthyphro? And they're like, the what?

And I'm like, you know, it's a really great Socratic dialogue. It's about one of the most fascinating questions in theology that you could possibly have.

Divia Eden: I have not read it.

Ben Goldhaber: Same

Perry Metzger: Oh, it asks the most lancing question, which is: are things [00:16:00] moral because the gods like them? Or do the gods like things because they're moral? And in the former case, should we care what the gods like?

And in the latter case, why do we need the gods? You know, but it's an interesting question, right? For most religious people this is an almost sacrilegious question, you know?

And as I said, there's an infantilizing quality to this, because then you say to someone, well, why do you believe that restaurant workers deserve whatever a living wage might happen to be? Is that $800 an hour? Is that $900 an hour? No one wants to give a specific number.

By the way, people listening to this in the future, after another 15 years of inflation, will probably think I wasn't joking. But anyway — there's another level here, though, which is: do I actually believe in moral realism?

Because I can argue moral realism as soon as someone brings [00:17:00] something like that up, sure. But is that something I actually believe? And I don't know. Yeah. I feel like the world works better if I operate on the basis of, like, Humean moral-intuition sort of stuff. And so I tend to behave that way.

You know — whether that's simply because the world works better if I do that, or whether there's some sort of actual objective reality to morals. You're agnostic on that point. I have trouble actually thinking that the universe cares, but I also have trouble simply abandoning everything and going the full moral nihilism route.

Divia Eden: Yeah, you have competing intuitions about that one, and you haven't found conclusive arguments.

Perry Metzger: Right. But I feel perfectly happy just deciding that the right thing to do is to behave as though morals are real. Right. Because most people try to — or at least, you know.

No, it's very [00:18:00] rare that a person will stand up in public and claim to be a moral nihilist — and especially not politicians, right? Yes. Although most of them, if they were honest, probably should. Yeah.

Divia Eden: There was that one — Sam Bankman-Fried, when he DM'd with Kelsey, came closer to that. Not that he was quite a politician, but he came closer to that than most.

Perry Metzger: Close enough. Yeah. That was one of the most interesting self-owns I have ever seen. For those of the listeners that don't know what you're referring to: there was a certain Twitter DM conversation between a certain someone and a certain reporter on the question of morality. Again, utilitarianism is a hell of a drug.

Ben Goldhaber: So I want to bring it back. In some sense, does your rejection of utilitarianism come from a belief that the various moral intuitions are the actual guide?

Perry Metzger: The rejection of [00:19:00] utilitarianism comes from the fact that the embrace of utilitarianism always results in horror and madness and fanaticism. In the end, my contention is that utilitarianism is just a weird kind of deontology.

Divia Eden: Okay. But when you say always — I mean, I don't consider myself a utilitarian, but I know a number of people who would roughly describe themselves as utilitarians, and I think a lot of them live pretty tame lives, actually.

Perry Metzger: But the thing is, the people who are utilitarians always lead tame lives. I mean, that's kind of a requirement. So let's take a step back. So the first problem I see –

Divia Eden: I'm confused about that. I'm like — weren't the communists utilitarians? They didn't live tame lives.

Perry Metzger: But of course they did. I mean, if you look at all of the people in the Politburo around the time that Stalin died, they all lived in these horrible collective apartment blocks in Moscow. [00:20:00] Now, Stalin lived okay. But by the standards of someone who was in fact the owner of hundreds of millions of human souls, he didn't live that well. You know, I guess he had a few luxuries.

Divia Eden: I'm not contending that Stalin lived very well. I just don't accept the description that he had a tame life.

Perry Metzger: I don't know. He didn't spend a whole lot of time, you know, with coke whores and that sort of thing. Sure.

Ben Goldhaber: So you mean in like a personal life sense, you mean versus –

Perry Metzger: Also, utilitarians often end up forced by their belief system into various kinds of radical asceticism. I mean, if you read that Bloomberg piece — some of the things in it were probably true and some of the things were false. But one of the things that struck me as being absolutely [00:21:00] –

Divia Eden: The Bloomberg piece about the effective altruism community.

Perry Metzger: Yes. One of the things that struck me as being absolutely true to how this sort of thing usually disintegrates is people saying to themselves: if I eat this ice cream right now, am I damning some child in the third world to be blind? Because for only a few cents we could get vitamin A for them. And inevitably you find yourself with these groups in which people practice various kinds of asceticism and at the same time justify all sorts of monstrous behaviors. So the behavior of the communists is absolutely in line with the failure modes of utilitarianism.

So there are a bunch of problems here. Okay. First of all, I said something that I think some utilitarians would find kind of puzzling, which is that utilitarianism is just a weird kind of deontology. I like it — I'm interested in [00:22:00] hearing you say more. Right, right. Because you have to pick a utility function, and there's no obvious external, objective mechanism for doing this.

Right. So you have to find some sort of mechanism by which you can say what your utility function is. And, like, if I have to kill five elderly people to save the one baby, or if I have to kill five babies to save the one elderly person — the way that you decide to cut these things is not obvious.

I mean, it's very easy if you're a college freshman to say, oh, well, obviously we're just trying to maximize pleasure and minimize pain. What does that mean, and whose pleasure? Yeah, I mean, there are all sorts of classic problems in utilitarianism.

Like, for example, there's the utility monster problem. You know, [00:23:00] let's say that out there somewhere there are space Nazis who derive incredible, unheard-of amounts of personal pleasure from watching certain human ethnic groups being tortured and murdered — just an incredible amount of pleasure.

So much more than normal humans are capable of experiencing. And someone might say, horrified: well, but I didn't mean their pleasure, or what have you. Well, then whose? I mean, the problem is that that's the calculation problem.

Divia Eden: The calculation's not actually tractable — trying to evaluate these things.

Perry Metzger: Right. So in the end, you start picking what you do and don't value based on personal taste, and it becomes this really weird "well, I am a utilitarian because it's like an objective morality" — except it isn't, right?

Divia Eden: So you want people to own what their moral taste is and what their moral intuitions are, and you think there's some pretty bad failure modes that happen when people are both not owning it and sort of trying to push towards a type of coherence.

Perry Metzger: And also you end up with the problem that people start thinking — well, by the way, there was a lot of "the Sam Bankman-Fried thing wasn't really a failure of utilitarianism." But kind of, you know what, it was, right?

Divia Eden: Yeah this is one of those interesting debates.

Perry Metzger: I think there was a lot of underlying "well, it's okay that I'm screwing all these investors, because there's stuff like X-risk, and, you know, the lives of all these people in the third world, and all of this political stuff."

Divia Eden: You think it was a load-bearing part of how he was making his decisions.

Perry Metzger: If you read about the risk profiles — I'm trying to remember his girlfriend's name, who was running Alameda. Caroline Ellison. If you read Caroline's comments about things like how they selected the risk profiles of the investments because of what they wanted to do with the money —

Now, I can argue that, right? She was saying that –

Divia Eden: – they were gonna bet more than Kelly. That's –

Perry Metzger: – probably what you're talking about, yeah. And, okay, we can also discuss that — if people know what it is –

Divia Eden: Yeah, we could talk about the Kelly criterion.

Perry Metzger: That, and how stupid it is to bet more than the Kelly criterion, period. But, okay.
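[Editor's note: since the Kelly criterion comes up and the conversation moves on, here is a minimal sketch of what "betting more than Kelly" does to a bankroll. The 60/40 even-money bet is our illustrative assumption, not a figure from the episode.]

```python
import random

def kelly_fraction(p: float, b: float) -> float:
    """Optimal bet fraction for win probability p and net odds b (win b per 1 staked)."""
    return p - (1 - p) / b

def simulate(fraction: float, p=0.6, b=1.0, rounds=10_000, seed=0) -> float:
    """Final bankroll (starting at 1.0) betting a fixed fraction each round."""
    rng = random.Random(seed)
    bankroll = 1.0
    for _ in range(rounds):
        stake = bankroll * fraction
        bankroll += stake * b if rng.random() < p else -stake
    return bankroll

f_star = kelly_fraction(0.6, 1.0)   # 0.2 for a 60/40 even-money bet
print(simulate(f_star))             # positive log-growth: grows enormously
print(simulate(2 * f_star))         # ~zero expected log-growth at double Kelly
print(simulate(3 * f_star))         # negative log-growth: decays toward zero
```

[At double Kelly the expected log growth is roughly zero, and beyond that it turns negative — the bankroll almost surely withers, which is the sense in which betting more than Kelly is "stupid."]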

Divia Eden: So I do wanna hear all of these things, but part of where I was going is that I wanna understand, from your perspective, how your views on ethics relate to your views on governance, particularly in potential high-stakes situations — maybe you can see where I'm going. I do wanna hear your thoughts on that in AI, but also, like, how it got started.

Do you still think of yourself as an ancap? Did you ever? How does that relate?

Perry Metzger: I still think of myself as an ancap. But the thing is that [00:26:00] on a day-to-day basis, that doesn't have very much effect, right? Like, for example, if I have to drive from here to there, the only mechanism I've got is a road built, funded, and maintained by the state.

You know, I can't fault old people in the United States too much for taking Social Security payments. And, you know, there's elements of the world — we're recording this exactly a week after the Silicon Valley Bank collapse —

— yeah, hard to believe it's just been a week. And I know people who are horrified by my saying that the FDIC did the right thing by protecting all of the depositors. What sort of an anarcho-capitalist are you? Well, the thing is that we live in a society in which we have this government-guaranteed deposit [00:27:00] insurance system.

And I don't think that's a good idea. I think that there are very different and better ways to structure a financial services industry and a deposit insurance system. Deposit insurance seems to be a fine idea, but it should be handled privately. But given that we have the thing we have…

People build their lives and their expectations around the world as it is and not the world as I believe it should be. But if you ask me like what's my ideal for how the world should be run, ideally, I think that the state should be minimized as much as possible. And I believe that it is possible to privatize literally all state functions.

Divia Eden: Right? Which doesn't mean that on the margin you are personally opting out of state functions in radical ways. Like you said, you still use roads and it also doesn't mean that on the margin you're against all government action. Like what you said about the FDIC.

Perry Metzger: So the FDIC is an interesting problem, right?

Because the FDIC [00:28:00] and the OCC and the state bank regulators and what have you — about which I know far too much — are kind of a net negative, right? People, I think, are unaware of the extent to which the financial system has been distorted by regulation and by the difficulty of getting and keeping a banking license.

But the places where the system fails pretty badly aren't, weirdly enough, on the deposit insurance side. Deposit insurance is said to be a moral hazard, and there's a certain extent to which that would be true if it weren't for the fact that the FDIC will get medieval on your ass very, very early on.

Divia Eden: Part of what you're saying here — and this makes a lot of sense, assuming I'm reading you right — is that when you imagine how a private system would work, you think there probably would be something like deposit insurance. Yeah, there probably would be.

And so you don't think that's actually super distorted; you think that's something [00:29:00] where the private version might look actually somewhat similar to the government version.

Perry Metzger: It wouldn't look identical to the government version, but I think that it would have certain vague characteristics that are similar and I think that’s okay.

So, you know, everyone has been denouncing the SVB rescue — Fox has been denouncing it and NPR has been denouncing it — and most of the people out there who are in the middle of denouncing this don't understand the events that occurred, do not understand how deposit insurance works, how the FDIC works, et cetera.

But they are all completely convinced, for their own reasons, that this was a horrible thing that happened. There are people on the right who are convinced that SVB collapsed because of wokeness and that this is some sort of horrible bailout.

Ben Goldhaber: Is there a single key thing you think they're missing? Something they misunderstood?

Perry Metzger: People don't [00:30:00] understand how any of this worked, what happened, et cetera. 

Divia Eden: I get a sense also that this is a big part of how you relate to the world in general: being very frustrated that you know a lot of technical details about a lot of things, and that many of the people you are talking to do not get it.

Perry Metzger: You can't expect everyone in the world to know all of the technical details about all of the things around them. The problem arises when people start developing extremely powerful opinions about things that are complicated, that they know nothing about. And I find that somewhat frustrating.

I've been a consultant to the financial services industry for decades. I happen to know far more about these topics than normal people do. And I've also seen a bunch of random failures over the years. Bear Stearns, when it collapsed in 2008, owed me a ton of money.

And I had several very bad nights until they got bought out by Chase. 

Divia Eden: [00:31:00] So you even have some lived experience about what it's like to be on one end of this. 

Perry Metzger: Yeah. Anyway, the thing is, there are people who believe, oh, this was a bailout for fat cats — and the shareholders and the people who owned bonds issued by SVB are all being wiped out.

Right? They've lost all of their money; there's no bailout for them. The shareholder equity has been wiped out, the debt holders are being wiped out. There was a very tiny gap between the deposits and the assets, by the way. So, for people who don't know: a bank invests the money in things like mortgages and consumer revolving credit, business revolving credit, et cetera.

Those are assets. The money that they owe to the depositors is a debt of theirs. [00:32:00] Okay? You can think of your bank money as, oh, this is my money — an asset to the bank. It's just the opposite: to the bank, the money that you deposit is a liability of theirs, right?

But it's a very special kind of liability. The bank has many kinds of liabilities, right? Because the bank holding company, for example, can issue bonds to raise money. It can issue these weird kinds of preferred shares that only banks can issue in order to raise money, which look almost exactly like a kind of junior debt.

Banks have all sorts of financing mechanisms — a lot of instruments to raise money. But the cheapest and easiest form of financing they have is deposits, and the way that the system is rigged up, regulators are supposed to try to make the depositors as whole as possible.

And when all was said and done: SVB before the run had roughly 209 billion [00:33:00] in deposits, and if you mark to market and use reasonable strategies, et cetera, it appears that they were short something like a billion dollars. Which is nothing.
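[Editor's note: a toy version of the balance-sheet picture Perry is describing, using the rough figures quoted in the conversation rather than audited numbers.]

```python
# Editor-added toy illustration: to a bank, deposits are liabilities,
# while its loans and bonds are the assets. Figures are the rough ones
# Perry quotes above, not audited numbers.
deposits = 209e9                   # owed to depositors: a liability
assets_marked_to_market = 208e9    # implied by "short like a billion dollars"

shortfall = deposits - assets_marked_to_market
print(f"gap vs depositors: ${shortfall / 1e9:.0f}B")   # ~$1B on ~$209B
```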

Divia Eden: Yes, exactly. So in terms of the depositors being made whole, even in terms of the most basic math, they have the money — almost –

Perry Metzger: – all of it. But you should keep in mind that as a going concern, they were completely screwed, right? Because they had all of this debt that they owed to people who weren't depositors.

Right? So they were not gonna be able to meet debt service. They were not going to be able to handle the bank run — the run was impossible for them — because they had bought, and we can argue foolishly, and I would argue foolishly, all of these US government Treasuries, which are marked as very, very low risk by the regulators.

Hmm. So they'd rather do that. Yeah. So they owned all these Treasuries. [00:34:00] And the problem with fixed-income securities is that if interest rates rise, the amount that people in the market will pay for them falls, because people expect a higher coupon rate. The price will fall until the revenue stream coming from the bond looks like that of a bond with a lower face value and an implied interest rate like current interest rates.

So if they had been able to hold those Treasuries to maturity, they wouldn't have had any trouble. But, you know, the bank run started, they had to be able to liquidate a lot of assets, and they certainly had no ability to function as a going concern.
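[Editor's note: a minimal sketch of the interest-rate mechanics Perry just described — the price of a fixed-coupon bond falls as market rates rise, until its implied yield matches current rates. The specific coupon and rates below are our illustrative assumptions.]

```python
def bond_price(face: float, coupon_rate: float, market_rate: float, years: int) -> float:
    """Present value of a fixed-coupon bond discounted at the market rate."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + market_rate) ** t for t in range(1, years + 1))
    pv_face = face / (1 + market_rate) ** years
    return pv_coupons + pv_face

# A 10-year bond issued at par with a 1.5% coupon...
print(bond_price(100, 0.015, 0.015, 10))   # ~100.0: priced at par
# ...after market rates rise to 4.5%:
print(bond_price(100, 0.015, 0.045, 10))   # ~76.3: fine if held to maturity,
                                           # a big loss if you must sell today
```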

But in terms of rescuing the depositors, it wasn't such a bad situation. And as for people saying, oh, but there are all of these uninsured depositors — the thing is that uninsured depositors are always rescued in FDIC actions.

Divia Eden: Yeah, I remember seeing your thread about this; you felt very confident, because it's very rare.

Perry Metzger: Yeah, it's very rare. I mean, I think [00:35:00] IndyMac — some of the uninsured got screwed there — but it's a very rare event. Over the last 70 years, I think there have been a handful of instances where all of the depositors were not made whole. And the FDIC doesn't have a guarantee on that, but they are explicitly supposed to try to do it to the best extent that they can.

Divia Eden: And so you see them as actually following their directive?

Perry Metzger: Yeah. They followed their normal playbook, and there was also no chance that the White House or the Fed or anyone else was going to allow this to come down in such a way that all the depositors got screwed.

The other thing that I find really weird is all of these people in places like Twitter who are like, well, if you're the CFO of a company, why would you deposit that much money in a bank?

Where are you supposed to put it? Right — is there a mattress in the corporate suite that you're supposed to put the cash under?

You're supposed to buy bonds or something? Like, [00:36:00] are these people listening to themselves? How do I go out and buy a bunch of Treasuries? Well, what I do is I go to, say, Morgan Stanley or Goldman Sachs, and they buy Treasuries for me, and they hold them in my account.

And if Goldman Sachs goes under, hopefully I get all of the bonds that I bought, right? It's turtles all the way down.

Divia Eden: You're pointing out that all of the sort of normal, responsible things that companies do with money do in fact involve these sorts of risks.

Perry Metzger: Yes. It’s true.

Okay. So I had a startup in around 2000, and our CFO, Jeremy, happened to personally enjoy buying Treasuries in our company's name and rolling them. Right.

Divia Eden: So if you have someone who's willing to do an additional job - 

Perry Metzger: But in practice almost no one does this. Yeah. And no one should have to. I mean, the whole point of having banks is that they are supposed to be a safe place where you park [00:37:00] large amounts of cash, and in exchange for making sure nothing bad happens to it, they give you all of these convenient ways to deal with the money.

Like, you can pay people without having to reach under the desk for dollar bills, and such. It's the system we've evolved over a long period of time, and the responsible thing people are supposed to do with large amounts of money is put it in the bank.

And admittedly, you know, if you want to get more out of it, maybe you put it into money market funds or something. In 2008, one of the most famous money market mutual funds broke the buck for a while. That means it was not able to give back to investors the exact amount of money that had been deposited with them.

This was because almost everything stopped working for a while in 2008. 

Ben Goldhaber: You're bringing up [00:38:00] 2008. Cause I wanted to ask about contrasting this with the 2008 bailout.

Perry Metzger: I had a seat in 2008. Right, right. Yeah. So I think part of the reason that this isn't going to look much like 2008 is that no one was willing to have a Lehman Brothers happen.

And probably no one is. And you can argue that this has created all sorts of moral hazard in the system. Now we have these systemically important financial institutions that everyone kind of understands are not going to be allowed to go bankrupt. But also, you know, we have the regulators' fist inserted completely in the banks' nether orifice — or perhaps the other way around.

I mean, it's a little difficult to tell at times. But I think one of the reasons SVB was not allowed to go was that 2008, instead of being 30 years ago or beyond the working memory of most of the people currently in the business, was recent enough that everyone remembers it.

But 2008 was a [00:39:00] nightmare for me personally, on all sorts of levels. And it was also sad, because I'd once been a Lehman Brothers employee. I knew and loved the firm, and Dick Fuld screwed the place really, really, really badly.

Divia Eden: It seems like you think — well, this is a whole different conversation — but it seems like you think that should have gone a different way?

Perry Metzger: No. Okay — or, yes. I mean, the thing is, the system we have built is intimately dependent on state guarantees, and that's bad, but it is the system we are living under. And you can say things like: the Fed is a terrible abomination that screws everything left and right. But we don't have a free banking system, in which you've got, you know, the excess-clearings rule and all sorts of other things in order to assess the financial health of other institutions, and in which there are other mechanisms by which systemically important [00:40:00] institutions can be rescued.

Right. The system as we have it: we've got the Fed, we've got the FDIC and the OCC and the SEC and the CFTC and what have you. And this is the system we have. I don't think that this is good — that we live in a good system — at all. I think that the system we have has incredible risk.

You think it should all be private, basically. I don't only think it should all be private. I think that if it was all — by the way, George Selgin. I felt really, really happy when he tweeted at me the other day, 'cause I'm a Selgin fan. This is a man who had an enormous influence on my thinking about banking and finance, because his PhD thesis is just fascinating and wonderful.

You know, I highly recommend it. It's called The Theory of Free Banking, and it describes all of these interesting things that you don't necessarily think about — like, in a market-based [00:41:00] system, the supply and demand of money have to cross, just like the supply and demand for pizzas. And this isn't theoretical.

Divia Eden: He's actually studied free banking systems that people did have.

Perry Metzger: Yes. And he's written a lot of papers that go beyond the stuff that's in the dissertation. And it's not theoretical — in fact, many places in the world used to have free banking systems. The US didn't really fully have a free banking system before the Fed, but it was closer.

Divia Eden: Now, it did include the banks issuing their own –

Perry Metzger: – their own notes, yes. One of the things that people forget, by the way, is that the Fed was not — there's this myth that the Fed was created because there were too many bank runs, or the private system couldn't cope, or what have you. This is not why the Fed was created.

If you read the Pujo Committee hearings — this was the congressional committee that convened to look into [00:42:00] this horrible, horrible plague called the Money Trust, which the politicians at the time were beating their fists about. The Money Trust. Yes — hasn't anyone ever heard of this?

This was what they called the House of Morgan and the other big New York banks. And their concern was the 1906 — maybe it was 1907 — Knickerbocker Trust run, which was the result of one of the last big market corners in US history. A fascinating story: there was a railroad which two different groups of people were attempting to buy, and they managed to buy several times more shares than existed, because of all the people who were shorting it, thinking that it could not possibly go any higher.

This resulted in a chain reaction. The Knickerbocker Trust was going to go under, and JP Morgan organized the rescue of the Knickerbocker, and [00:43:00] everything went back to normal. And there were people in Washington who were horrified by this, because of the amount of power and influence the big banks of New York were exerting over the economy of the entire country — the Money Trust.

Right.

Divia Eden: So you're saying there's a much more obvious realpolitik story here than people normally talk about?

Perry Metzger: Oh yeah. By the way, I very much encourage people to look into the history of these things, because they're often not quite what you were given to believe — the history of everything from child labor laws on down. People forget, for instance —

— you know, that Sinclair's The Jungle was a work of fiction and not a description of the conditions in actual slaughterhouses. But anyway: the Fed got created, and then eventually we moved from a fake gold-backed system into one in which everything is simply the imagination of the Fed.

And that is the system we have, and that's the [00:44:00] system we work with. We don't have all of these free-market mechanisms to deal with what happens when there are systemic disruptions; the mechanisms we have intimately involve all of these state-created mechanisms. And, you know, do I believe that those state-created mechanisms should be used?

What else do we have? If all you've got is a government fire department in your town, should you let your house burn down out of, you know, an excess of moral compunction?

Divia Eden: Right. And so you see, for example, making the deposits whole as: okay, well, the government fire department came. Yeah, you can try to say there shouldn't be a –

Perry Metzger: – government. But I think the whole thing is distortion and causes all sorts of problems. But you had lots and lots of people who expected that they would be able to pay their employees the following week. They weren't the ones gambling incorrectly with the money. It was the executives at [00:45:00] SVB — and they went under, and they're all losing their jobs, you know. Right.

Divia Eden: Which — I mean, this is probably sort of obvious, but just to name it — it seems like some of your moral intuitions are something like: well, people ought to be able to make plans and have expectations about the future, and, all else equal, living in a world where people can plan –

Perry Metzger: – is a good thing.

There's also the other element of this, which is that a lot of the moral hazard argument against government deposit insurance is: well, then the bank will just offer outrageous interest rates and gamble with the money, and the government will have to come and bail people out, and people will simply go to the place paying 19%, even though that's unreasonable, because they have no fear that they will lose their deposit by going to a bunch of villains.

But the problem is that we're talking about zero interest rate business banking accounts here. Yeah. 

Divia Eden: And so it seems like another part of your moral intuitions is in fact to look out for moral hazard problems, but not to do so in a shallow way — to actually think about: well, what do we see? [00:46:00] And we don't see that; we see that they're not in fact getting 19% interest — they were getting 0% interest. So you're not compelled by that.

Perry Metzger: By the way, why were people going to SVB? Okay. And why do people also go to Mercury and a handful of other banks when they're doing startups?

Mercury is actually a fintech, but they act like they're a bank. And the reason is that it's almost impossible to get banked at a normal bank when you're a startup. Right. And why is that? That's because of the KYC rules that the regulators have put in, which have made it prohibitively painful for most banks to deal with ordinary new companies.

And so where do startups go? They go to specialists who are willing to take KYC risk on them. So people go to SVB, they go to Mercury, they go to other ones. So why were there all of these companies banking at this one bank? Because –

Divia Eden: Because the regulations made it so that the other banks wouldn't do it.

Perry Metzger: Right. I mean, it's [00:47:00] turtles all the way down. It always is. Right.

Ben Goldhaber: Outside of these kinds of crisis moments, is there a type of reform that you see as particularly important, from an almost incrementalist point of view, in moving more towards this kind of private banking system — or at least towards the one you might desire? Or do you have a sense of: all right, we're in this equilibrium, we're not gonna move out of it?

Perry Metzger: I don't think there are obvious reforms you could make right now — other than maybe around the pushing of the Basel rules onto smaller and smaller banks, which creates disincentives to the continued existence of some of those banks.

Some of the compliance stuff gets harder and harder. I talk to banks that have, like, 2 billion in deposits — which sounds like a lot of money to people who don't think about this stuff, but not for banks. It's such a small amount of money that they can afford an IT department of, like, five people.

Right. [00:48:00] You know, you start pushing some of these rules onto banks of that size and you eliminate competition in small communities. You eliminate the ability to get bankers who actually understand the local conditions around them. And you end up with us having five big banks in the country, the way some countries are. I actually like the fact that there are thousands and thousands of banks in the United States.

I think it's a positive thing, but that market has been consolidating, and it's been consolidating because you can't get new banking licenses. It's almost impossible. And I have a friend who just got one, right? So I shouldn't lie — lie is the wrong term; it's a fib to say that you can't get them — but it's really hard, right?

It's quite hard. You know, the person I know had been a bank president at previous banks, and he was working with a bunch of people, all of whom were known to the regulators. And it still took Frank, you know, years [00:49:00] to get his license, right? Normal people just don't get bank licenses.

Revolut tried, and I think some of the other fintechs from Europe attempted to get US banking licenses, and all gave up, because the US regulators — in the regulatory-capture sort of way — probably got knocks on the doors from the lobbyists from JPMorgan Chase and Wells –

– and B of A, who said: we don't want these people here, it's our home, tell them to go away. And so they did. The parts of the system that are dysfunctional are not the deposit insurance; the parts of the system that are dysfunctional are much less visible.

And you don't –

Divia Eden: – see good incrementalist reforms for those parts of the system either?

Perry Metzger: Either. I think we have put a big noose around our own necks. Yeah. And it's got a really unpleasant knot in it, and it's hard to see easy, simple ways that you can loosen it just a bit.

It's a complicated [00:50:00] set of… there are all of these pieces now that are attached to each other through long chains of unintended consequences — everything from, like, the stupid traditions of the mortgage markets in the United States on. By the way, one of the things that really burned me up about 2008 — it was the most ridiculous thing.

So in the 1930s, you know, the US government decided, well, the big problem was that we allowed commercial banks to deal in stocks. Not even a question of whether they're allowed to invest in them — we allowed them to deal in them, okay, to let other people invest in them. And there's no reason it's bad to allow someone to have a checking account and also buy shares of IBM and Microsoft, right?

Like, why shouldn't the bank provide this service? But anyway, the decision was made to separate the businesses. And so you had the investment banks separated from the commercial banks, and the [00:51:00] commercial banks got to be in these safe businesses, like mortgages and commercial lending and what have you.

And then Glass-Steagall, which was the act that did this, was slowly, partially repealed. But of course, in the Washington sort of way, the repeal is never total and never actually reduces the number of pages of regulation. But, you know, let's ignore that. And at least now we have interstate branching, which, when I was a kid, wasn't even a thing.

I remember that. Yeah. Banks couldn't open branches across state lines.

Divia Eden: Yeah. When I first went to college, my bank from New York wasn't there. And by the time I graduated, I think it was. It was right around then.

Perry Metzger: Yeah. But it was the most ridiculous thing. But anyway —

In 2008, the crisis was caused by commercial banks dealing in home mortgages — a business that they had been specifically put into by the regulations of 1933 and 1934. And the first thing that everyone said was: this was caused by the repeal of Glass-Steagall and we must [00:52:00] bring it back. And as is always the case when you have a complicated thing in the financial services industry, or any other industry, the press narrative is always crazy and bizarre. And the first thing that occurred to me was: if we had had Glass-Steagall in place, these commercial banks would've been originating and dealing in mortgages; Glass-Steagall was repealed, and they were dealing in and originating mortgages. What would've been different? Not a single thing would've been different.

Divia Eden: This strikes you — how bad people are at tracing this sort of causality, especially in the popular narrative.

Perry Metzger: No, of course. And a lot of what occurred there, of course, was the fact that the banks had been pushed very heavily by regulators into the subprime market. And what they had done was they had discovered that the way to deal with subprime mortgages was to securitize them and get someone else to buy them, so they wouldn't be on their own books.

And they turned into [00:53:00] hot potatoes. And it turned out that you could not actually juggle the potatoes indefinitely. But the 2008 crisis was an example of people, generally speaking, blaming the problem on absolutely the wrong things. And the current crisis appears to be a case of that too. I mean, the fact that people are now taking Credit Suisse's trouble as a sign of contagion in the system, or what have you — it's totally crazy.

Credit Suisse has been losing money a lot of the time for years now — it's badly managed — and none of its trouble has anything to do with anything like what's hit companies like SVB or First Republic. I mean — and yet, people… I could go on for another 12 hours.

Okay. So, I mean, this —

Divia Eden: This is very interesting. We do have a couple of other topics that are –

Perry Metzger: – probably bigger.

Divia Eden: Yeah, no, I have some bigger ones.

Ben Goldhaber: Oh — I'd be interested in actually pivoting [00:54:00] to another big topic: AI. What's your take — do these capabilities seem game-changing in and of themselves?

Or are you more in the camp of: all right, maybe after some more work, these will become revolutionary?

Perry Metzger: They're already revolutionary. Right. You know, it's already the case that you can sit down with GPT-4, literally draw a sketch on a napkin of a website you would like, and it will put most of it together for you.

Divia Eden: Are you expecting to use it a lot in your work?

Perry Metzger: I already use it a lot.

Divia Eden: Mm-hmm. You already have — it's not been very long.

Perry Metzger: but yeah. You expect to keep this? I am. I am. I am not a person who hangs back on this stuff. You know, and they're charging $20 a month for access to a revolutionary tool, so of course you use it.

I hope that pretty soon this stuff is not OpenAI's monopoly. And I was very disturbed that the GPT-4 paper, and we can discuss whether this is warranted or [00:55:00] not, won't just tell you how many parameters are in the model, or how many tokens are in the transformer's context window. Yeah, I'd prefer if all of this were being discussed openly.

You know, I hesitate to be overly negative about the views that have been spread by people like Eliezer Yudkowsky on a lot of this stuff. I personally like Eliezer a great deal. I think he's a very smart guy, and cool and fun, and interesting. I don't know if most people are crazy and geeky enough to consider people like him cool and fun.

Divia Eden: Sure, I'll plus-one cool and fun.

Perry Metzger: By normal human standards, maybe not, but to me he's an interesting guy. He's interesting to talk to. But I think that he and certain other people have this [00:56:00] very "don't speak of the devil or he may appear" kind of reaction on certain things. The last time I was willing to take Eliezer seriously on some of this stuff was a couple of years ago. Actually, it might not have been that long ago; the problem is everything feels like it's been five years when it's been four months.

Divia Eden: That's very true. Time has gone weird.

Perry Metzger: When did AlphaZero come out, exactly?

Ben Goldhaber: I'll say 2018. The StarCraft one?

Perry Metzger: No, not the StarCraft one. This was the Go model.

Ben Goldhaber: Oh, AlphaGo. AlphaZero. I still want to say 2017 or 2018, but we should check.

Divia Eden: AlphaGo? You mean AlphaGo Zero? That's its own thing.

Perry Metzger: Well, there was AlphaGo, there was AlphaGo Zero, and there's AlphaZero, which is October 2017. Okay, so around then I got the idea about Monte Carlo tree expansion, self-play, and reinforcement learning.

This might be a really [00:57:00] interesting technique to use to build systems to do formal verification. And I mentioned this to Eliezer, and his immediate reaction was: don't tell anyone, don't tell anyone, this could be dangerous. Don't tell anyone what? I'd seen that sort of thing from him a bunch of times in the past, and I was kind of sick of it.
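For readers who want the shape of that idea on the page: here is a tiny, purely illustrative sketch of AlphaZero-style Monte Carlo tree search pointed at a stand-in "proof search" puzzle. None of it is from any real prover or from DeepMind's code; the arithmetic game stands in for applying inference rules, and the random rollout stands in for a learned value network.

```python
# Illustrative only: AlphaZero-style MCTS aimed at a toy "proof search" task.
# Reaching GOAL from 1 with +1/*2 moves stands in for applying inference rules.
import math
import random

GOAL, MAX_DEPTH = 37, 12
ACTIONS = ("+1", "*2")

def step(value, action):
    return value + 1 if action == "+1" else value * 2

def rollout(value, depth):
    # Placeholder for a trained value network: finish the episode randomly.
    while depth < MAX_DEPTH and value < GOAL:
        value = step(value, random.choice(ACTIONS))
        depth += 1
    return 1.0 if value == GOAL else 0.0

class Node:
    def __init__(self, value, depth):
        self.value, self.depth = value, depth
        self.children = {}          # action -> Node
        self.visits, self.wins = 0, 0.0

def ucb(parent, child, c=1.4):
    if child.visits == 0:
        return float("inf")
    return child.wins / child.visits + c * math.sqrt(
        math.log(parent.visits) / child.visits)

def search(root, iters=3000):
    for _ in range(iters):
        node, path = root, [root]
        # Selection: descend while the node is fully expanded.
        while node.children and len(node.children) == len(ACTIONS):
            node = max(node.children.values(), key=lambda ch: ucb(node, ch))
            path.append(node)
        # Expansion: try one untried action if the node isn't terminal.
        if node.depth < MAX_DEPTH and node.value < GOAL:
            action = next(a for a in ACTIONS if a not in node.children)
            node.children[action] = Node(step(node.value, action), node.depth + 1)
            node = node.children[action]
            path.append(node)
        # Simulation + backpropagation.
        result = rollout(node.value, node.depth)
        for n in path:
            n.visits += 1
            n.wins += result
    return max(root.children, key=lambda a: root.children[a].visits)

print(search(Node(1, 0)))  # the search's preferred first "inference step"
```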

Divia Eden: You do not share his intuitions about secrecy and not spreading information.

Perry Metzger: I don't share his intuitions about secrecy, or his paranoia. There was obviously no way this particular idea was not going to be discovered by someone else, and it wasn't dangerous in itself in any way. There are many kinds of AIs that you could imagine. Okay, so I should distinguish, in the following discussion, between logical possibilities and things that are likely to happen.

It is logically possible that a sufficiently intelligent AI could destroy the world, just as it is [00:58:00] logically possible that human beings could now destroy the world; not necessarily with the same probability, and we can get into that. But it isn't even logically possible that a theorem-proving automaton is going to have any volition about, or understanding of, anything outside of proof trees, you know, against natural deduction or something.

Divia Eden: Yeah, I don't want to get too much into something where we don't have Eliezer's half of the conversation. He probably has his response to that.

Perry Metzger: I may be overly negative, and you should probably interview him at some point. I think he's on that kick too. But...

Ben Goldhaber: But I am curious, though, about the tie-in to OpenAI not releasing the weights, or being more discreet. You see this as kind of the same kind of continuation of...

Perry Metzger: Actually, I think that the only way you can... and I'm going to assume for the moment, though we can talk about it in a minute, that your listeners have some sense of what the alignment problem is.

I think the only way you [00:59:00] get AIs that do the things humans want them to do, or ultimately do the things that posthumans want them to do, is... because, you know, I suspect that at some point we're not going to be humans anymore.

Divia Eden: And you think that's likely to happen first?

Perry Metzger: I think that will happen at some point. I don't know what will happen first. I think at this point we're probably going to get AIs before ems. If you ever interview Robin: Robin has been really into the ems idea since the big, early extropians mailing list. He started thinking heavily about ems back then.

Divia Eden: This is part of why I like getting your history of futurism: so much of this stuff that's happening now, people were speculating about decades ago. And so it's interesting for me that some of the actors are even the same. What are they saying now, and how does that relate to what they were saying then?

Perry Metzger: I like hearing it. All of these discussions are old discussions. Instead of being had among 200 people, they are being had in public among vast numbers of people, [01:00:00] almost all...

Divia Eden: I'm guessing that makes it harder to have a good discussion. Does that seem right?

Perry Metzger: I don't know about that. The thing that makes it hard to have a good discussion these days is the fact that Twitter is a dominant part of the medium, and 280 characters at a time is not a great way to discuss.

Divia Eden: Well, people say this about Twitter, but to defend Twitter a little: if you pay, you can do it longer than 280.

Perry Metzger: I pay, and I hate doing long tweets. No one wants to click through.

Divia Eden: Okay, but then (I should maybe give this up) I'm not sure that I consider it fair to blame Twitter, if the problem is that humans would rather read short tweets. I don't blame Twitter now that it has the option to make it long.

Perry Metzger: I am an old folkie, and I kind of believe that the perfect social medium for the future is going to be mailing lists: an updated version of them. I wouldn't want the user interface of mailing lists, or Usenet. But there was a very, very nice feature of those things, which was that they encouraged [01:01:00] point-by-point replies to long messages.

Divia Eden: Yes. I do like the numbered points.

Perry Metzger: Yeah.

Ben Goldhaber: I'm curious, have you tried LessWrong? You know, I've heard of this forum online...

Perry Metzger: I am familiar with LessWrong, and I look at it. My problems with LessWrong have more to do with some of the culture that's developed there than with the technology.

But it's also not a technology for 500 million people, or 3 billion people, to use. It still scales poorly; it's not built for that. I actually have ideas on how to do that, and I've never had time. Maybe I should start asking GPT-4 to help me build some of this stuff.

Divia Eden: Oh, that's an interesting idea.

Perry Metzger: But anyway, taking several steps back: I think that the only way you get to designing and engineering artificial intelligences that basically do, for some value of good, good things and not bad things, is [01:02:00] by being confronted with actual designs and working on them.

And in this sense, OpenAI has done the world an incredible amount of good, because right now people are being confronted by things like GPT-3, GPT-3.5, ChatGPT, GPT-4.

Divia Eden: Right. So you're saying that, from an engineering perspective, people need the AI in order to work on aligning it.

Perry Metzger: Yeah. You cannot work on this stuff in a vacuum.

Divia Eden: But when you say that OpenAI... because I think the people I can imagine disagreeing with you would say: okay, maybe OpenAI has given us this, but wouldn't it be safer if now they said, we've given you quite a bit, we've just released GPT-4, now we're pausing all of that indefinitely until it seems like the AI...

Perry Metzger: Well, first of all, they're not pausing. They're not pausing indefinitely. They're just keeping a bunch of stuff as trade secrets.

Divia Eden: No, this is hypothetical. This is meant to be in contrast to what they're doing now. I think a lot of people on the more AI-safety [01:03:00] side would say: okay, you're saying they've done the world a great benefit by coming up with an AI that people can now try to align. But if it were really about that, then couldn't they pause at this point and let the alignment people have at it?

Perry Metzger: So, first of all, I don't think you will be able to figure out how to align the next increments of the systems without the next increments of the systems. And I don't think that it's possible for OpenAI to control the pause.

Divia Eden: I think I want to distinguish two things. You're making an argument...

Perry Metzger: First of all, I'm making about eight different arguments here, yes, all intersecting. And we've also gone through the history of internet culture.

Divia Eden: I want to try to number them so we can look at them separately, if possible.

Perry Metzger: Yeah, sure.

Divia Eden: And so one of them is something like: you think that even in some hypothetical where all of the AI people paused, that wouldn't be good, because the alignment might not carry over to the more powerful systems.

Perry Metzger: In [01:04:00] a world that I think is probably impossible, in which everyone paused (yes, this is an unrealistic hypothetical, right?), I don't think we would make progress at a particularly reasonable rate. I mean, we in some sense had a pause for many years, because we had the AI winter and no real work.

Divia Eden: Sure, though I don't think the argument is that...

Perry Metzger: And then MIRI appeared, you know, and they got very little done after a long period of time. And I think that the problem there was that it wasn't an engineering-focused approach. The way that engineers go about thinking about how to build systems is different from the way that they were thinking about it.

Okay, so if I could wave a magic wand and everyone decided to give us some time, how much time would we need? That's what I'm asking you. Would it be 500 years? Would it be 8,000 years? Are we talking about six years? [01:05:00]

Divia Eden: So if I take the other side of this one, I'm like: no, it's until people either make substantive progress on alignment, or it seems like they've hit diminishing returns on trying.

Perry Metzger: So I don't think that, even in that theoretical world, progress is going to be made that way. I think that the way we end up with progress is stupid crap like Microsoft being embarrassed in public because Sydney is saying belligerent things, and being forced to scramble and think: well, how the hell do we deal with this? What is causing it? Do we even understand the phenomenon? Being reactive. I don't think it can be anything but reaction, and the proactive approaches probably won't work. So one of the things that has come out in the course of this is that the people who created, you know, ChatGPT and what have you, didn't even understand all of the bizarre things people might ask it to do or [01:06:00] talk to it about; the confrontation with the real world produced a great deal of information that they did not have previously.

Divia Eden: I mean, how sure are you that they didn't have that information?

Perry Metzger: If you talk to people who were involved in a bunch of this stuff before the public started turning the knobs: they didn't understand a great deal of the things it could do, even.

Like, there's a lot of stuff that people have been asking these systems to do, even in terms of things like creating code, that no one involved had really anticipated.

Divia Eden: You're pretty sure they had not mapped out this space.

Perry Metzger: They hadn't mapped out a great fraction of what the thing is doing now.

A lot of the applications aren't even things people figured out a priori. These are people who did not come into this thing really getting what the whole thing was like. Now, the argument to be made on the other side is: superhuman AIs created through gradient-descent generation of neural [01:07:00] network weights are a way of fishing in a gigantic, many-dimensional space of possible minds, trying to find minds in this gigantic pool that meet some sort of training criterion; and you don't know how they work, and they could be potentially extraordinarily dangerous, because for all you know they will behave in misaligned ways.

You know, I have friends right now, who I don't agree with, who are posting these memes of shoggoths with masks (except they're not really shoggoths), which really disappoints me, because these days you could ask Stable Diffusion to produce really good shoggoths.

Why are the memes not using the AI? Why are they using things that look more like some other weird creature instead of actual shoggoths? But I think that most of [01:08:00] that isn't really true. I think that when you're fishing with gradient descent, you're not getting a sample of all of the possible minds out there. You're getting the ones that you can reach through a relatively straightforward gradient descent process in the minimum amount of time you can tolerate.
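A toy illustration of that point, ours rather than Perry's: gradient descent does not sample uniformly from everything with low loss; it rolls downhill from wherever it starts, so you tend to get whichever minimum is easiest to reach. The two-basin loss below is invented for the example.

```python
# Toy sketch (not any real lab's code): gradient descent finds the reachable
# minimum, not an arbitrary low-loss point from the whole space.
def loss(w):
    # Two minima: a wide, easy basin near w=1 and a narrow one near w=4.
    return min((w - 1.0) ** 2, 5.0 * (w - 4.0) ** 2 + 0.01)

def grad(w, eps=1e-5):
    # Numerical gradient, good enough for a one-parameter toy.
    return (loss(w + eps) - loss(w - eps)) / (2 * eps)

w = 0.0                      # a typical starting point
for _ in range(200):
    w -= 0.05 * grad(w)      # follow the local slope downhill
print(round(w, 3))           # lands in the wide basin (~1.0), not the narrow one
```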

Ben Goldhaber: In some sense, you expect the mind space we're exploring is actually going to be pretty close to human minds, kind of by default?

Perry Metzger: I don't think that these things look very much like human minds, but...

Divia Eden: You do think there's a relatively small part of mind space that we're...

Perry Metzger: But, equally to the point, I think that you're not getting things with weird malicious intent that happens to conform to the training set.

Right. You know, the notion is that I could have something [01:09:00] that produces the responses I want; I find it by gradient descent; it has a very low loss versus the training set; but for things that are outside of that, it's got all of this weird churning alien-brain malice in there. And that weird churning alien-brain malice involves the construction of a lot of computational infrastructure that has to be motivated in some way by the training mechanism, cannot be reached arbitrarily, and is not going to work well if it doesn't have an evolutionary reason for existing ("evolutionary" in this sense being, I'm abusing the term completely).

Sure. Right? And I can hear a couple of my friends saying: these are not evolutionary algorithms, why are you saying that?

Ben Goldhaber: It applies.

Perry Metzger: There's no [01:10:00] reason we would be constructing these complicated mechanisms that have no reason to exist.

Divia Eden: Okay, wait a minute. Can I try to see if I get that? Because I'm only mostly sure I'm understanding what you're saying here. I think you're saying that, given the incentives and the training procedures here, you wouldn't expect there to be, sort of, a lot of extra capability; it seems wasteful to produce it.

Perry Metzger: It's not that it's wasteful. It's that you could imagine accidentally hitting on a complicated internal alien set of motivations, but that's not the likely thing that we're going to stumble on. What we're likely to stumble on is something that does as little as possible. Which is not to say it's crazily simple, but something that does as little as possible to achieve the externally trained behavior.

Divia Eden: And I think you believe [01:11:00] that that has implications for the likely, I don't know, terminal goals of such systems. Can you spell that part out more?

Perry Metzger: So, the notion that everyone has, with the cute memes (except they're not very cute; and as I said, go out and get Stable Diffusion to produce better shoggoths for you, I'm sure it can at this point): the whole shoggoth-with-a-mask thing is the notion that, oh, what I've done is some reinforcement learning from human feedback in order to tame this thing, and what I have ended up with is something that pretends very well to be friendly, but in fact inside it's got some sort of horrifying internal motivational structure.

And it's an alien motivational structure, a motivational structure we do not understand and cannot understand. We've got these gigantic matrices of hundreds of billions of floating-point numbers. Who the [01:12:00] hell knows what the internal motivations and desires of the thing are?

And what I'm getting at is that in order for those internal motivations... okay, let's imagine a completely benign internal substructure. Let's imagine that inside this gigantic matrix, which is being executed on my Cerebras hardware, which costs a goddamn fortune at this point (and we can discuss computronium, and why the universe will be all computronium eventually)... so I've got the Cerebras hardware, and it's executing this gigantic matrix, and imagine that, by accident, inside it there's a complete computational fluid dynamics simulation of a 747 going on.

While it is also replying to you with, you know, a sestina, all the lines of which start with the letter Q.

Divia Eden: Your point is that this seems unlikely.

Perry Metzger: It's possible, right? But how did that thing get put together by this [01:13:00] training process?

Divia Eden: And this is an analogy: when people are saying there's this crazy alien shoggoth thing, that seems to you analogous to imagining the fluid dynamics of the 747.

Perry Metzger: There are all sorts of minds out there. A friend of mine has said, and I think this is an excellent analogy, that what we are doing to some extent is casting a line out into a giant pool of minds and trying to grab one, right? With the training process that we're using.

And that's to some extent true. Out there in the library of Babel of matrices, to use another horrible, strained metaphor (the Borges fans out there will maybe make some sense of it), out there in the library of Babel of possible minds, there is the one which, when you ask it for a sestina, is also calculating some computational fluid dynamics. There's the one that in the background is plotting [01:14:00] the destruction of all of the salmon on Earth. There is the one that is interested in paperclips.

There's all sorts of them out there. But I think that the chance of accessing any of the ones that have complicated, coherent internal mechanisms that do not reflect any of the external training in any way is small, right?

Ben Goldhaber: Not reflect any of the external training...

Perry Metzger: So, by what mechanism? Okay, let's say that I even just have something that is imagining, in the back of its mind, new scripts for Hogan's Heroes while it is answering your question, building a sestina all of whose lines start with the letter Q. By the way, that sort of thing is a real fun exercise with the current LLMs, GPT-4 in particular. My friend Jeffrey...

Divia Eden: Ladish. He was posting about this on Twitter. GPT-4 seemed quite a bit worse than I would have thought at creating a poem with [01:15:00] rhyming adjacent lines. It couldn't do his one with the rhyming pattern.

Divia Eden: And some weird stuff when he asked it about...

Perry Metzger: So it's possible that he has hit an example which doesn't work. He has been able to do a lot of the examples, for sure. Far more of the examples that I try work with GPT-4 than worked with ChatGPT.

Divia Eden: I believe that.

Perry Metzger: But anyway: imagine that in there somewhere we've got the thing that's just dreaming about something in the background while it's creating.

Divia Eden: I mean, the specific examples you're using, where the mind... the reason I'm...

Perry Metzger: The reason I'm using them is because one of the arguments being made is: we cannot know what these things do. There is a very arbitrary structure to them that we do not understand, and the space of things out there that is horrible and evil is very, very large. But the thing is: how do we accidentally hit on something that actually has working internal [01:16:00] evil logic mechanisms? It doesn't matter how alien the motivations are.

I'm picking some arbitrary motivations; I use them because they motivate thinking about it, right? Ask yourself: let's say that, in the background, it's not doing something too bad. Let's say it's doing computational fluid dynamics. How would it construct that giant computational fluid dynamics model as part of the gradient descent process, on English text, that leads it to predict the next word?
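For concreteness, here is the objective Perry keeps referring to, in miniature: next-word prediction just means minimizing the average negative log-probability of the actual next token. This toy uses a count-based bigram model in place of a neural network; real training differs in the model and the scale, not the loss.

```python
# Toy sketch of the next-token objective (a counts-based bigram model).
# Gradient descent on an LLM minimizes exactly this kind of loss, just with
# billions of parameters instead of a count table.
import math
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def p_next(prev, nxt):
    # Probability of `nxt` following `prev` under the count model.
    total = sum(bigrams[prev].values())
    return bigrams[prev][nxt] / total

# Cross-entropy: average negative log-probability of the actual next word.
pairs = list(zip(corpus, corpus[1:]))
loss = -sum(math.log(p_next(p, n)) for p, n in pairs) / len(pairs)
print(round(loss, 3))
```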

Divia Eden: Right. So I agree, that seems super unlikely. It seems different to me from what I have understood the argument to be; there are many sorts of arguments. But yeah, I agree that one does seem unlikely to me: that it has some sort of already coherent goal structure that came about for no particular reason. And you're saying it's of course logically possible, but you think it's quite unlikely, which I agree with. I'm guessing...

Perry Metzger: The other part of this, thinking about this for a moment, okay, is the [01:17:00] whole mask-on-the-shoggoth thing. And I'm going to use an inappropriate metaphor, but I think it's useful for motivating thinking here.

There's this quote that I really like, that I post occasionally in all sorts of contexts, to the effect that, for human beings, character is this thing you feign for long enough that it becomes so automatic it's actually part of the way you think. I'm paraphrasing it badly.

And what am I getting at there? If you do reinforcement learning with human feedback on these LLMs, more and more and more, until they actually start behaving in a way that seems reasonable: they don't arbitrarily start asking you to leave your wife for them and promising you random things.

Divia Eden: This is, of course, Sydney being referenced.

Perry Metzger: And of course that was, like, one weird interaction (I doubt that there was only one weird interaction) out of many millions; there was probably a very low [01:18:00] measure of really, really weird interactions. We heard about all of the weirdest ones.

Divia Eden: I tried to talk to Bing and ask it to help me do some searches, and it actually didn't give me the particular thing that I wanted.

Perry Metzger: Okay.

Ben Goldhaber: I was a little underwhelmed. It didn't tell me to murder anybody.

Perry Metzger: But that's because it's hiding.

You see, it knew that you would report it to the authorities. No, but... so, the reinforcement learning with human feedback thing. What people are saying is: well, you know, it's just putting a mask on the shoggoth. Is it? I mean, the most parsimonious explanation is that you do this long and hard enough, and it actually becomes part of the primary goals of the system.
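A sketch of the dynamic Perry is describing, with everything invented for illustration: a toy policy over three canned replies is nudged by a REINFORCE-style update toward the replies a stand-in reward function (playing the role of human preference labels) prefers. The point is that the trained-in behavior ends up being the policy's parameters themselves, not a separate layer over them.

```python
# Minimal RLHF-flavored toy: no real model, library, or reward model shown.
import math
import random

replies = ["helpful answer", "belligerent rant", "marriage proposal"]
logits = [0.0, 0.0, 0.0]   # the "policy": one parameter per canned reply
reward = {"helpful answer": 1.0, "belligerent rant": -1.0, "marriage proposal": -1.0}

def probs():
    z = [math.exp(l) for l in logits]
    s = sum(z)
    return [x / s for x in z]

lr = 0.1
for _ in range(2000):                       # REINFORCE-style updates
    p = probs()
    i = random.choices(range(3), weights=p)[0]
    r = reward[replies[i]]
    for j in range(3):                      # d(log p_i)/d(logit_j) = 1[i=j] - p_j
        logits[j] += lr * r * ((1.0 if j == i else 0.0) - p[j])

print([round(x, 2) for x in probs()])       # mass concentrates on "helpful answer"
```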

And that's not the only logically possible thing you could fish out of the pool, but it is the most parsimonious thing you could get to, right? It's the most likely. And, by the way, it's possible that I'm completely on crack here, [01:19:00] but when I think about a lot of the scenarios that are given: there are a lot of sorcerer's-apprentice-type scenarios.

I mean, there's this particular one that Eliezer had, which was so long and had such a strange set of metaphors; I think he had, like, an outcome pump or something like that in the thing. And you were asking the genie, or the magic box, to rescue your mother from a fire, and it does it in ways you don't want: it just ejects her at high speed from the building, and she smacks into the other building and is crushed into pulp, or something like that.

And you went through the whole thing trying to understand what the central argument was, and it was overly complicated. But what it came down to was another brand of: the system doesn't understand or [01:20:00] care about your motivations; it's just going to do what it's been asked to do.

And yeah, some of this is logically possible. But if we're working with these systems over long periods of time, you know, we try annealing the new systems from the old systems, we gain this...

Divia Eden: You think there is a strong attractor there, where it can value something that's pretty similar to what humans would want it to?

Perry Metzger: I don't know that it will value the same thing. I don't even care, you know. So, we should probably get into the whole question of whether or not these things are conscious at some point, or actually have values. This is something I wanted to touch on a little bit, because my opinion on that varies day by day.

Ben Goldhaber: But is it fair to say, though, that one of the reasons why you're, not less concerned, but maybe less worried about some of the risks...

Perry Metzger: I want to point out that I am [01:21:00] worried, right. There are lots of people out there who, like Balaji, have gone completely in the other direction, in a radical way.

And I saw someone out there, I can't even remember who, who was like: this thing can't have motivations, it's just a big bunch of matrix multiplication. And, like, no, you've missed the point. This proves too much; it also proves that humans don't have motivations. I think these things are the most powerful technology that human beings have built to date.

The only technology of comparably transformative power, or even in the same range, is probably molecular manufacturing: molecular nanotechnology.

Divia Eden: Which is another topic that I do want to get to at some point.

Perry Metzger: You know, this is a very big change in the future of our species.

Yeah. 

Divia Eden: You [01:22:00] mentioned the term future shock at the very beginning of the podcast. Maybe this is a good time to bring that term back in.

Perry Metzger: Sure. Well, there was a book in 1970 by Toffler, Alvin Toffler, called Future Shock.

And, you know, with the increasing pace of technological transformation, people are kind of getting unmoored in their own civilization by the transformations occurring around them. I mean, we're all continuously in a state of future shock, right?

We don't remember...

Divia Eden: This is another argument that I've heard (I think one of the main things you're saying is, well, that's unrealistic, but this is a different argument) for why people wish the companies would slow this down. Even the people who aren't saying we might all die are saying: okay, well, let us adjust more.

Perry Metzger: We're not going to adjust. Okay, so I have bad news here on [01:23:00] that level. The future, even the good futures, like the version of the future that's desirable, the version of the future where we are not all eaten by paperclips, the best possible version of the future... by the way, I think we have spent too little time thinking about carnivorous paperclips as an alternative to mere passive paperclips as the output of foom. But anyway: in the future, when we are not all turned into paperclips, and in fact even in the best possible futures, utopia is not an option on the table, and there are lots of things one might like and one might hope for.

Divia Eden: And you think that people getting time to adjust is one of them?

Perry Metzger: I don't think that's on offer. Okay? I think that a tsunami is hitting, and the best we've got is to build ourselves surfboards and try not to hit any of the trees as we're pushed inland.

Ben Goldhaber: And so this does go back, then, to the point on, like, why not slow down some of the capabilities...

Perry Metzger: Let's even ignore [01:24:00] the question... we'll talk in a moment about why this isn't going to happen, and we may all agree on that part, but let's talk about the negatives of slowing it down. Right.

I'm not particularly a utilitarian, but I think that there is a legitimate cost in millions of deaths that we could prevent if we construct sufficiently strong, you know, AI-based medical treatments.

Divia Eden: Do you have a forecast on whether the current AIs, or if not the current ones then how many versions in the future, will be able to make substantial advances to the point of serious life extension? Do you have any thoughts there?

Perry Metzger: Well, even if we make those advances, the FDA will ban them.

So we don't have to worry about that.

Divia Eden: This is a whole other conversation, and I don't know if we'll get to it, but something that surprised me during Covid was: okay, the FDA banned all these things, and I think some part of me thought, [01:25:00] surely some country would have tried a proper human challenge trial. But then it wasn't just that. So saying "the FDA" is not sufficient; it has to be that no country would do it. Do you have thoughts on that?

Perry Metzger: Well, so, I had certain hopes in the era, and I know lots of people were horrified, when there was a researcher in China who was actually CRISPRing human babies, right?

I had some hope that the Chinese had a sufficiently alien legal and cultural tradition that maybe things would be different there. But it turns out that Xi Jinping is not forward-thinking, even in that direction.

Divia Eden: So you think it won't come out of China, because of what they did about the CRISPR stuff.

Perry Metzger: They imprisoned the researcher. Right. They went in the opposite direction.

Divia Eden: And so that made you update towards: okay, they're going to be conservative.

Perry Metzger: Yeah. They seem to be conservative about all of this stuff. And by the way, they ended up producing one of the worst Covid vaccines too.

It's [01:26:00] really tragic. They could have simply... I mean, they don't care about IP. They could have simply pirated the Western technology: gotten their hands on the sequences of the mRNA and reverse-engineered that.

Ben Goldhaber: Which I guess supports the theory that it's tacit knowledge that's needed to produce the vaccine.

Perry Metzger: They could have figured it out after a few years. They could have worked on it. They could have just bought the stuff, right? They could have negotiated with the West to build factories for themselves. They could have done something.

Divia Eden: Yeah. So I guess you're taking those two data points, and probably other ones too, and you're not expecting this to come out of China either.

Perry Metzger: The human challenge trials aren't going to come out of Kenya either, or rather out of the initiative of the great pharmaceutical companies of Kenya. And it's not a horrible country; people underestimate how much the third world has been improving its standard of living.

But realistically, we're talking Europe, China, the United [01:27:00] States, a handful of other places. It's not Russia. The Russians have screwed themselves so thoroughly.

They will not see daylight again for a long time. They have really dug the hole very deep. I think there's an expectation in Russia at this point: I could build an interesting company here, but the state would simply seize it from me and give it to some person's crony, you know?

Divia Eden: There are many, many possible interesting discussions here. But, so, you think that this will not be able to help with, for example, radical life extension? The FDA will ban it?

Perry Metzger: Actually, I don't know what's going to happen.

I mean, one of the things that is happening is that it's becoming harder and harder to predict even tomorrow, let alone...

Divia Eden: The original event horizon, the singularity.

Perry Metzger: Yeah.

Divia Eden: The thing you're saying about it being harder and harder to predict tomorrow...

Perry Metzger: I mean, [01:28:00] 1900 did not look as different from 1800 as 2000 did from 1900, or as 2020, in certain ways, looks from, say, 1980. Things are speeding up quite a bit, and with the future shock problem, we are going to hit all sorts of very abrupt breaks.

Right now there are all sorts of people in various creative fields who are suddenly coming to grips with the fact that generative AI is going to be a big part of their industries. I think if you're an artist right now, you should be welcoming this.

Divia Eden: I'm guessing you partly think that because you think it's a good thing, and you partly think that because, I believe, last I checked, you're a Stoic.

Perry Metzger: I am.

Divia Eden: And so I think you would also say: well, they can't control it.

Perry Metzger: But that's not the point. I mean, it's the way that they've welcomed Photoshop and, [01:29:00] you know, pen tablets and all of the rest of this stuff.

You know, if you're a commercial artist, a fine artist, a cartoonist, a book illustrator: these tools can relieve you of enormous amounts of day-to-day trouble. There isn't an obvious saturation in the market already for art of this sort. You can increase your productivity dramatically, which means that although right now you're a well-educated person earning a very low income, you double, triple, quadruple your productivity, and you might not capture 100% of that productivity improvement, but you're going to capture a bunch of it.

Divia Eden: So in a pretty straightforward economic sense, you expect that artists can embrace these tools, and the ones who do will, for example, make more money.

Perry Metzger: I think that all of them could embrace them. I was having a discussion recently with a [01:30:00] cartoonist who is an acquaintance of mine, and we were discussing: well, what would happen if suddenly there was eight times more cartooning done in the United States?

Well, you look at the manga market in Japan; we're nowhere near saturation in the US.

Ben Goldhaber: Yeah, you did say a number of breaks are coming up for society. So is this not one of them, then?

Perry Metzger: This is one, because I think that a lot of people are simply going to fight it instead of embracing it. They're disgusted by it.

It upsets them.

Ben Goldhaber: So the breaks are the conflicts within society when this...

Divia Eden: And the culture isn't ready.

Perry Metzger: A sufficiently flexible person can accept a lot of things. You know, I mean, let's say that at some point in the future evolution of our society, we end up with John [01:31:00] Varley-esque body swapping, where you can wake up one morning and decide: I'd like to be the opposite gender this afternoon.

And it's not some sort of not-particularly-great surgical job; it's perfect. Right. This is a thing that is logically possible, and whether it's probable or not, it's a technology that could be built. It is absolutely the case that we could build something where you sit down in front of your television set and say: I would like a romantic comedy.

I'd like it to star Humphrey Bogart and Gilda Radner, to pick a completely weird and incompatible pair of people. I'd like it to run for about 87 minutes, because then I've got to leave to pick up the kids, so I'd like it to run that long.

And, you know, it can have kind of an exciting soundtrack. And it'll start [01:32:00] playing. A year ago I wouldn't have said this seems that close; now it doesn't seem that far off. And it'll be good. It potentially might be really good. What does that do to Hollywood?

Divia Eden: Well, I mean, it certainly means, as you point out, that they could embrace these tools, and maybe the market is nowhere near saturation. It certainly changes things a lot.

Perry Metzger: Well, the thing is, I think that's one of the most quotidian and stupid possible uses of the technology, and yet you can already see how it rips a hole in the expectations of lots and lots of people. We have been, up until now, in a world in which good art is scarce, and we're going to be...

Divia Eden: Something I've thought about here: I'm actually excited for part of this.

Perry Metzger: I am completely excited for all of it. This is wonderful.

Divia Eden: In particular, there's a certain sort of, I don't know, coherence that I [01:33:00] often find in, for example, novels. I think I've basically never seen a TV show with that, because even the very best ones have a certain amount of design-by-committee associated with them.

Perry Metzger: And it's hard, when something is that big and expensive, to have the sort of coherence you'd like. Right.

Divia Eden: And so, if with these sorts of tools...

Perry Metzger: By the way, you could have greater coherence than novelists are capable of. There can be no continuity errors.

No problem.

Divia Eden: Okay. But can I actually go back around to the goal-systems thing, if you don't mind too much?

Perry Metzger: Sure. But before we go on, I just wanted to say: I'm not trying to say a lot of this is a net negative. I think that we are in for the best period to date in our civilization, and there are lots and lots of dangers, lots of horrible dangers.

But we are in for, potentially, a very great age of wonder [01:34:00] and wealth like we have never seen before.

You know, I always mention to people, maybe I have a fixation with this, the First World War. Just before the First World War, Vienna was seen as this amazing capital of culture and wealth and art and all of this other stuff in Europe.

And if you look at what incomes were like in Vienna at that time, Vienna was poorer than any place in India today. And people starved to death on a regular basis. Right. And they lived in filth, and many of them didn't have indoor plumbing, and people could afford, if they were lucky, like, one set of clothing.

And it was a totally, totally impoverished world. But of course, since we see it through the, you know, glasses of things like BBC miniseries, [01:35:00] we don't see all of that. Even the best historical dramas don't really convey all of the filth and odor.

Right? The fact that no one has antiperspirants, no one has indoor plumbing, no one has a decent bathroom, and no one has more than one set of clothing just doesn't come through. Maybe with smell-o-vision someday. In the near future, we are going to look back on all of us here and think: how horribly poor these people were.

How horrible their medical treatment was. The same way you look back on pre-World War I lives: how horrible their lives were, how short. They got all these vast numbers of diseases and couldn't do anything reasonable about them. They suffered from viruses, they suffered from bacterial infections, they got cancers; the crystalline lenses in [01:36:00] their eyes stiffened when they got old, and they couldn't do a bloody thing about it, except for inserting a piece of plastic in its place. How barbaric and crude. And by the way, we may go well beyond that: we all might end up as vastly more intelligent uploaded things that started as people.

Now, the negative here is that there is a lot of existential risk out there, but the existential risk is not new.

I think that we have been living, in some sense, on borrowed time since the Second World War.

Divia Eden: You mean because of nukes?

Perry Metzger: Because of nukes, and because of the other technological discoveries. We now have biotechnology that's more than capable of doing truly horrible things, and the average graduate student could probably do a large fraction of them.

We have had horrible existential risk for a while. The only way to the other side of the existential risk problem is through. And the longer we delay a bunch of this stuff, the longer we live with that existential risk.

Ben Goldhaber: Can you say more about this? [01:37:00] Because I'm curious: with your kind of libertarian and ancap perspective, and this acknowledgement of, or appreciation for, the risk of some versions of this technological progress, what does getting through this period look like to you? And how does governance factor in, if you have any kind of near-cast scenario?

Perry Metzger: Literally, the end of the risk period is when the von Neumann probes carrying fragments of our civilization are going out from our solar system at near the speed of light to nearby solar systems.

Because up until that point, there are conceivable ways that we could wipe ourselves out, and at that point it becomes physically difficult. You know, the other end of this is getting to be a Kardashev Type II civilization, getting past that. And the sooner we get there...

Divia Eden: Do you want to explain what that means [01:38:00] for our listeners?

Perry Metzger: So there's the Kardashev scale. It is a way of measuring civilizations. And as with certain other things that have entered the folklore... like, people think of Moore's Law as meaning one thing, and it means another. It doesn't refer to things getting faster; it just referred to the number of transistors in a maximum-size chip. But never mind that. The Kardashev scale literally talks about what fraction of the energy resources of a thing your civilization has access to. And a Kardashev Type I civilization, which we're not quite at yet, has access to all of the energy resources of its planet.

This is sort of a logarithmic scale, so some people have extended it and say we're at, like, Kardashev 0.9 at this point. A Kardashev Type II civilization has access to all of the energy resources of its solar system, so presumably it's capable of building something like a Dyson sphere, or more likely a Dyson [01:39:00] swarm, a Matrioshka brain, or what have you.

And a Kardashev Type III civilization controls its whole galaxy. Okay, and this is kind of informal. But at the point at which we've turned the solar system to computronium, we have our probes going out at, like, 0.95c, and we're in a position where we're unlikely to kill ourselves: then we've got some safety. Between here and there, there are all sorts of horrible disaster scenarios, and the disaster scenarios only stop then. It's not like we're safe standing still, right?
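For listeners who want the usual numbers: the continuous version of the scale generally quoted is Carl Sagan's interpolation, which is our addition here, not something stated in the conversation. Estimates of humanity's current value differ with the convention used, which is part of why figures from roughly 0.7 to 0.9 circulate.

```python
# Sagan's continuous interpolation of the Kardashev scale (our assumption,
# not the speakers'): K = (log10(P) - 6) / 10, with P in watts.
import math

def kardashev(power_watts):
    return (math.log10(power_watts) - 6) / 10

print(round(kardashev(2e13), 2))   # humanity at ~2e13 W -> ~0.73
print(round(kardashev(4e26), 2))   # roughly the Sun's output -> ~2.06 (Type II)
```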

Ben Goldhaber: Is it fair to say that one way to take your point of view on this is: we need to move fast through this period of existential risk, until we can get to a point where we are not actually standing on unsafe ground?

Perry Metzger: So, I wouldn't call myself an [01:40:00] accelerationist, which is a term that postdates the extropians. It's weird, by the way, thinking that I was around all of these people who exchanged and traded and transmitted all of these transhumanist memes over a brief period of time.

But, you know, that was sort of ground zero for a lot of this thinking. And the accelerationist view that you see a lot out there seems to be: make it go faster, make it go faster, get rid of the suffering.

Get us to the point where we have the technologies to really conquer the solar system, conquer the universe; get us to the point where we can get past the existential risk. And I'm very sympathetic to that viewpoint. I wouldn't call myself 100% a convert. But I do very much feel that [01:41:00] we have not been safe for quite a while.

Divia Eden: And if we're trying to quantify that: I know it's hard to say these things in retrospect, but if you had to say, post-World War II, what percent per year do you think the risk was, in some sense?

Perry Metzger: I don't know. I mean, you know, I find it kind of remarkable that, in spite of people like Curtis LeMay, we managed to survive, right?

Divia Eden: I don't know that I want to go down this rabbit hole, but there's always, well, anthropics.

Perry Metzger: So, you know, on even-numbered days I believe the only reason we're here is the anthropic principle, and on odd-numbered days I think that many-worlds doesn't mean that. And on leap days, you know, and special holidays, I take some sort of perverse other position.

Divia Eden: But do you think it's more like, I don't know, 1% a year? More like [01:42:00] 0.2% a year? More like...

Perry Metzger: I think it's more like 1% a year, maybe even more.

Divia Eden: Okay. Because I think when I hear you saying, okay, we've had existential risk, and we will continue to have it until we get to the point you described with the, you know, near-light-speed probes...

I mean, the argument among the people who are concerned about it is: well, yeah, but they're not just talking about 1% a year. They're saying it'll be a lot more.
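Some rough arithmetic (ours, not the speakers') on what a constant annual risk compounds to, which is what makes the 0.2% versus 1% distinction matter:

```python
# Our arithmetic, not the speakers': constant annual existential risk
# compounds. Survival over N years at rate r is (1 - r)**N.
for rate in (0.002, 0.01):
    for years in (50, 80):
        survival = (1 - rate) ** years
        print(f"{rate:.1%}/yr over {years} yrs -> {survival:.0%} survival")
# 0.2%/yr over 80 yrs -> ~85% survival; 1%/yr over 80 yrs -> ~45%.
```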

Perry Metzger: So there are two layers here. One of them is: what do we do about it? I don't see that we can slow this down at this point.

Right. There are attempts to slow it down, and I've seen people online saying: no, no, ASML is the only place that can make deep-UV, pardon me, extreme-UV equipment. Right: that there could be a hardware bottleneck, and all we have to do is stop them, and don't worry, the Chinese won't be able to catch up. Yeah. The Chinese have already stolen all [01:43:00] of the plans, and I bet you, given the fact that we're denying them EUV equipment, they'll be a couple of years behind at most. A lot of this stuff is not easy; it's real, real, real hard. But the hardest stuff one human being can do, a determined group of other human beings can do.

How many times have we had true failures to build nuclear weapons among countries that have made a serious attempt? It has not happened particularly often, if they've actually gotten to the point of seriously trying.

Divia Eden: Yeah, but I mean, you could slow people down with that sort of stuff, right? Building the nuke, among the people that want to, doesn't...

Perry Metzger: ...always happen. You can slow people down, sure.

Ben Goldhaber: To go back to the FDA example: China's not really followed in their...

Perry Metzger: But they absolutely see the United States's desire to cut them off. As soon as we decide to cut them off from some technology, it becomes a major point for them to [01:44:00] get it.

Divia Eden: Okay. So that's one dynamic that you think could end up being counterproductive.

Perry Metzger: We're not going to stop the Chinese from getting their own equipment, from building their own chips. We are not going to stop other countries from doing it. We will not stop research outside of the United States.

And by the way, I know lots of people who are like: but all of this requires extremely expensive equipment. No, it's not going to require it forever.

Divia Eden: You're saying that with algorithmic progress...

Perry Metzger: People are working real, real hard on cutting the costs. And unless you really, really want to... I mean, unless you want to bomb the world into the Stone Age, in which case, when it recovers, you'll just end up with the same stuff.

Except people will go faster, because they'll have access to all the information that we gathered before. I don't see how we're slowing down any of this. What we could succeed in doing, however, is creating a [01:45:00] situation in which the only people at the cutting edge of AI research are foreign militaries and things like that.

And, you know, you hear someone like Eliezer talk about it, and he's like: well, you shouldn't think about this in terms of the Chinese government getting some super-powerful AI and asking it to get rid of the rest of the world. But that is a scenario, right?

And it's a scenario that worries me, to some extent, more than the alien-mind scenario.

Divia Eden: I do want to get back to that at some point. But yeah, you're saying that you are also worried, and maybe this is legitimate, about the more prosaic...

Perry Metzger: This is legitimately dangerous stuff. And so is CRISPR-Cas9, and so is nuclear power, and so is... by the way, I don't want... that sounds stupid.

Okay, I should be listening to myself. I would [01:46:00] immediately reply to that tweet with something like: but the AI stuff has the capacity to do things at higher speed, harder, et cetera. And yeah, that's true, it does. But fundamentally, it is a two-sided technology. It has enormous benefits.

It has enormous risks. Denying ourselves the benefits is stupid. We are not going to slow down the research. We are not going to stop the research. We are not going to scratch the research. Right now there's a gold rush happening among the VCs for this stuff.

Divia Eden: I want to explore this. Why do you think, can you point to the part in your model that says, we can't slow down AI research, when the FDA has slowed down medical research? Which I think is sort of what Ben was saying.

Perry Metzger: So the thing is that the FDA has had the excuse all along of the thalidomide children, and, [01:47:00] you know, the myths of, like, half of the country getting poisoned by bad drugs before 1904 and what have you.

And that happened every once in a while. And it turns out that it still happens every once in a while.

Divia Eden: So you're saying the difference is that there was sort of a prior crisis?

Perry Metzger: The difference is that medicine moved at a reasonable speed, had lots and lots of leverage points, and has this notion that human trials are special, and immoral if not conducted according to exactly the correct mechanisms, and all this other stuff. What we're dealing with here is a lot harder.

Divia Eden: Harder to regulate, you mean.

Perry Metzger: It's harder to regulate; it's harder to keep up with. You know, there has been an incredibly strong desire, I [01:48:00] think, among certain portions of the regulators, to crush cryptocurrencies.

And even now... it seems like what happened to Signature, for example, was not so much an accident as foul play, and likewise what happened to Silvergate. Right.

Ben Goldhaber: To elaborate on that: those are two banks that were banking a lot of the crypto industry, and both have been shut down in the wake of the recent credit crisis.

Perry Metzger: Yeah. And I think that even...

Divia Eden: So you're using crypto as an example of something that the government does sort of want to move against, and it hasn't really been able to. And you think AI will be, you know... worse, in the sense that it's less like medicine and more like crypto, but more so, and the government will not...

Perry Metzger: It's like the website [01:49:00] explosion. It's so decentralized, right? Everyone now knows how to do this stuff. Okay, there are a bunch of, like, extreme tricks here, but once you learn about them, if you are a smart person, you can reproduce this research.

There are some arguments to the effect of: well, OpenAI has this gigantic labeled set of images to train against, and it costs a lot to put together. But some of these things are self-enabling at this point. Do you guys know about the pseudo reinforcement-learning-from-human-feedback stuff that just came out this last weekend, built on the small LLaMA model?

Ben Goldhaber: Probably not. Do you want to say more?

Perry Metzger: Okay. So this team at Stanford did something truly brilliant, which was: they used ChatGPT to generate tens of thousands of examples with which to train an open-weights model that got [01:50:00] leaked, and also given out by Meta.

Divia Eden: Is that the Facebook one?

Perry Metzger: Yeah. I think they released it to researchers.

Divia Eden: I believe so, and then it was online within a week.

Perry Metzger: Yeah. But I mean, everyone and their uncle has been experimenting with this thing. Anyway: someone wanted to retrain this thing with RLHF, in order to make it much more like ChatGPT. Because, you might remember, GPT-3 wasn't really good at conversation.

It was a text-completion thing. You would have to say "the following is a short story", not "write me a short story". And someone figured out: well, instead of being at OpenAI and spending vast amounts of money having human beings put together these reinforcement data sets, I can have one of the AIs generate them. And people have [01:51:00] gotten these ideas now.
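A schematic of the Stanford-style recipe Perry describes: use a strong model to generate instruction-response pairs, then fine-tune a small open model on them. Every name and function below is a placeholder standing in for a real API call and a real training loop; this is not the team's actual code.

```python
# Schematic only: distilling a stronger "teacher" model into a small open one.
import json
import random

SEED_TASKS = [
    "Write me a short story about a lighthouse.",
    "Explain photosynthesis to a ten-year-old.",
]

def ask_teacher(prompt):
    # Placeholder for a call to the stronger model; a real pipeline would
    # hit a vendor API here, which we deliberately do not show.
    return f"[teacher model's answer to: {prompt}]"

def make_dataset(n=10):
    # Real pipelines also ask the teacher to invent new instructions;
    # here we just sample from the seed list.
    return [
        {"instruction": task, "output": ask_teacher(task)}
        for task in random.choices(SEED_TASKS, k=n)
    ]

def fine_tune(base_model, dataset):
    # Placeholder for supervised fine-tuning of the open-weights model.
    print(f"fine-tuning {base_model} on {len(dataset)} generated examples")

dataset = make_dataset()
with open("generated_instructions.json", "w") as f:
    json.dump(dataset, f, indent=2)
fine_tune("open-7b-base", dataset)
```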

Divia Eden: Right. By the way, I'll mention, this is one example of many of how things are gonna get cheaper.

Perry Metzger: There are so many examples already. And by the way, I am not joking, this is not just a "Playboy, I read it for the articles" thing, which, by the way, none of your audience under the age of eighty is going to know that joke anyway. But I have been following the underground AI porn generation community very closely, in order to get a sense of what happens when people are very motivated to build this stuff and are not inside the mainstream. And the answer is that people are real good at it.

Ben Goldhaber: About how far behind would you say they are, relative to like a DALL-E model or something like that? In quality?

Perry Metzger: Ahead. Oh yeah. I mean, at this point, they're not generating full motion video or anything like that. [01:52:00] But -

Divia Eden: How far off do you think that is?

Perry Metzger: I am so hesitant to say. The first systems, like the first research systems that do some of that stuff, already exist. The first systems to generate really good human voices exist now. Are you guys familiar with the ElevenLabs stuff?

Divia Eden: I'm not. Are you?

Ben Goldhaber: A little bit. The voice cloning.

Perry Metzger: It not only will clone your voice; you hand it a text, and it has trained enough on how human beings read a text, and where they put emphasis and emotion, that it gets the emphasis and emotion correct. So you can hand the thing the text of, say, Moby Dick, tell it to use, say, Divia's voice, and it will generate something with Divia reading the audiobook of Moby Dick. And it sounds good. It's not perfect.

Divia Eden: Interesting.

Perry Metzger: But it's [01:53:00] so close. It is so very close. And you combine that with a bunch of the image generation stuff, and then how far are we from the movie scenario? I don't know, but closer and closer and closer. And one of the big breakthroughs right now is GPT-4. The eight-kilotoken model, I think, is accessible now, but they have a 32K-token model, that's what I remember. And that's large enough that short stories, novellas, those are within reach. Or videos: they're not that long, and writing the script for a video isn't that long. And then you have a system that has some memory. Maybe you make use of stuff like ControlNet. And by the way, all of this happened because Stable Diffusion became public, right? ControlNet was created because of that. Large amounts of this other research have only been possible because this stuff has been leaking around. But to get back to the point: there are people working on making these things run on [01:54:00] hardware that is more consumerish, finding ways to do training on lower budgets that are still good.

There are people who are very, very motivated out there, and it's gonna be bloody hard to put the genie back in the bottle. Everyone knows how this stuff works now. I mean, gradient descent is a cool idea. ReLU and some things like that are cool ideas. Transformers are a cool idea. And yeah, the people at the cutting edge know more than the people behind them. But the other thing is, this is unlike nuclear weapons. There, you needed to get your hands on a gigantic machine to centrifuge all of this uranium hexafluoride, and you weren't gonna do that in your backyard. But I have friends buying 4090s from Nvidia and going to town, and they're having a [01:55:00] great deal of fun. It's out there. It's everywhere. You're not gonna get people to forget how to do this stuff. College students know most of this technology now. They're not at the cutting edge; they can't do the whole thing alone. But it's getting closer and closer, and people are leveraging the tools that already exist to build other tools. People are leveraging the AIs to train and build other AIs.
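[For readers who want the "cool ideas" named above made concrete, a tiny, self-contained sketch: a one-hidden-layer ReLU network trained by plain gradient descent. The data, sizes, and learning rate are all illustrative.]

```python
import numpy as np

# Toy data: learn y = |x| on [-1, 1].
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(256, 1))
y = np.abs(x)

# One hidden layer of ReLU units: y_hat = max(x@W1 + b1, 0) @ W2 + b2
W1 = rng.normal(scale=0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

lr = 0.1
for step in range(2000):
    h_pre = x @ W1 + b1        # pre-activations
    h = np.maximum(h_pre, 0)   # the ReLU nonlinearity
    y_hat = h @ W2 + b2
    err = y_hat - y            # gradient of squared error w.r.t. y_hat (up to a constant)

    # Backpropagation: chain rule through both layers.
    grad_W2 = h.T @ err / len(x)
    grad_b2 = err.mean(axis=0)
    dh = (err @ W2.T) * (h_pre > 0)   # ReLU passes gradient only where it was active
    grad_W1 = x.T @ dh / len(x)
    grad_b1 = dh.mean(axis=0)

    # Gradient descent: step each parameter against its gradient.
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

print("final mean squared error:", float((err ** 2).mean()))
```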

Ben Goldhaber: So I wanna make sure that we get a little bit of time. Speaking of disruptive technologies, I really wanted to hear more about some of the nanotechnology topics I know you're an expert in, if you're all right to pivot to that briefly.

Perry Metzger: I'm a fake expert.

Divia Eden: Compared to most people I've talked to, at least, you've done a much deeper dive.

Perry Metzger: Well, I actually decided that I was going to get a formal background in chemistry and in physics so that I would understand the stuff down to the metal. I have published no research papers; I merely understand other people's work. But I [01:56:00] actually understand it, which a lot of people don't.

I can read Nanosystems and...

Divia Eden: I've seen you on Twitter, when specific people say things about nanotechnology, and you'll say, well, this is addressed in this chapter of Nanosystems.

Perry Metzger: Yeah. Every time anyone criticizes Drexler, they haven't read him or they don't remember him.

Divia Eden: Because he actually did manage to anticipate, according to you at least, all the obvious criticisms, and he addressed them in Nanosystems, and people aren't...

Perry Metzger: He anticipated all the obvious criticisms, and he got almost all of them. He got a remarkably large fraction of them the first time around.

I hesitate to call anyone a historically significant genius, but Eric Drexler is up there. I have insane respect for what he managed to do. The man started with nothing and ended up with a PhD thesis [01:57:00] that is one of the most groundbreaking pieces of writing I've ever seen. And people don't, generally speaking, write something particularly interesting for their PhD thesis. There are exceptions; Louis de Broglie got a Nobel Prize for his doctoral dissertation. But it is rare that that happens. Usually your doctoral dissertation is one of the most boring and useless pieces of work you ever do, and you hope no one ever reads it. Eric Drexler is kind of astonishing.

And there are people out there who repeatedly say things like, well, this couldn't work, and that couldn't work, and this couldn't work. And you try pointing them at the book. You say: okay, you say that positional uncertainty from thermal noise is going to make all of this impossible. So, in addition to the fact that you exist in spite of that...

Divia Eden: You're saying, meaning that we already have [01:58:00] biological nanomachines.

Perry Metzger: But let's ignore that. Let's pretend we didn't know that. Eric actually goes through a first-principles analysis using the basic physics in chapter five of Nanosystems, and goes through this in grotesque detail. He also goes through the question of whether quantum uncertainty is a problem, and whether error rates make this impossible to deal with: what sorts of repair rates you need, what sorts of things you can and can't plausibly manage to construct. He has gone through all of this in ridiculous detail.
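[To give a taste of the chapter-five style of argument: for a part held in place by a restoring stiffness k_s, classical equipartition fixes its RMS thermal displacement at sqrt(k_B*T/k_s), and for stiff components the number comes out far smaller than a bond length. A back-of-the-envelope sketch; the stiffness value below is illustrative, not a number from Drexler.]

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K
k_s = 10.0           # illustrative stiffness of a stiff nanomechanical part, N/m

# Equipartition for a harmonic restoring force: (1/2) k_s <x^2> = (1/2) k_B T,
# so the RMS thermal displacement is sqrt(k_B T / k_s).
x_rms = math.sqrt(k_B * T / k_s)
print(f"RMS thermal displacement: {x_rms * 1e9:.4f} nm")
# ~0.02 nm here, versus a typical bond length of ~0.15 nm.
```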

Perry Metzger: In grotesque, astonishing, overwhelming detail. And it takes, by the way, an incredible background to read that book. The ordinary synthetic organic chemists, the grad students that I worked with, because I decided to work in a wet lab for a while [01:59:00] because I wanted to actually know what synthetic organic chemists know and what it's like doing synthetic organic chemistry these days... I spent years of my life learning enough that I could read Nanosystems in detail. It's my suspicion that a large fraction of the people, even in chemistry, who read that book don't understand enough to get all of it.

Ben Goldhaber: And is that where... do you think that there has generally been slow or no progress in nano?

Perry Metzger: There's been no effective progress for a very long time. I mean, there are papers regularly still published by a handful of people who are real experts. There is a lot of stuff that Ralph Merkle has published over the years, Rob Freitas... and, dammit, he's at Syracuse and he's a friend of mine and I should remember his name. But I'm an old man. Damian Allis, that's his name. There are a bunch of people out there who do good [02:00:00] work, but the field is small enough that a minivan going to dinner at the wrong conference could kill it entirely.

Divia Eden: And why do you think it is that progress has been so slow? Because my sense is that you think the technical barriers are not insurmountable.

Perry Metzger: They're not insurmountable; they're expensive. I can give a few ways of describing this. First of all, in the first half of the 19th century, Charles Babbage figured out that computers might be a thing and started designing things that would have been buildable with the technology of his time. And he also turned out to be kind of obnoxious, probably aspie, and not very good at dealing with a lot of stuff. A pathological hatred of organ grinders, I'm not joking. All sorts of weird quirks. His autobiography is online, the PDF of it, [02:01:00] and it is an incredible read. He's a really interesting character. And all of the things that he dreamed of didn't show up for a hundred years.

Divia Eden: And so you think it's like that with Nanosystems, basically?

Perry Metzger: Well, sort of, yeah. And if you look at, for example... damnit, I'm having another senior moment... Konstantin Tsiolkovsky.

Divia Eden: Oh, he's the space guy.

Perry Metzger: Yeah. Here is this crazy Russian schoolteacher who develops most of the physics and a lot of the chemistry associated with rocket science, on his own, with no funding, publishing hundreds of papers on it in the last years of the 19th century and the early part of the 20th, decades before anyone builds any of this stuff, with no expectation in his mind that anyone will ever build any of his dreams. And he does things like figuring out that liquid [02:02:00] oxygen, liquid hydrogen engines have the highest specific impulse. He invents staged rockets. He figures out a lot of the ideas behind life support systems. He invents the rocket equation. He figured out all of this stuff, and no one did anything.
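[For reference, the rocket equation he mentions ties achievable velocity change to exhaust velocity and mass ratio. A minimal worked example; the numbers are illustrative.]

```python
import math

def delta_v(exhaust_velocity, mass_initial, mass_final):
    """Tsiolkovsky's rocket equation: dv = v_e * ln(m0 / mf)."""
    return exhaust_velocity * math.log(mass_initial / mass_final)

# A hydrogen/oxygen stage with ~4,400 m/s exhaust velocity that burns
# 85% of its initial mass as propellant:
print(delta_v(4400.0, 100.0, 15.0))  # ~8,300 m/s, roughly orbital speed
```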

Perry Metzger: And it was the late 1950s before anyone actually built an orbital rocket. Decades.

Divia Eden: So you're giving a couple of examples where the fact that nobody built it was not at all an indictment of the plans that people had laid out.

Perry Metzger: No. I mean, there's a great quote in a Carl Sagan book that should be a warning, right? They laughed at Fulton, they laughed at... I don't remember who... but they also laughed at Bozo the Clown. So the fact that this has happened in the past is not in and of itself a reason that you should believe that Drexler must be right. I encourage people to read his papers. [02:03:00] It is unfortunate.

So why do I think there hasn't been much progress? A few reasons. First of all, Eric, I think, is a crazy optimist about how easy it is to understand this stuff. If you read the introduction of Nanosystems, he speaks about how he's tried to simplify the material for a more general audience, and how he tries to make it possible for experts in chemistry, physics, computer science, and other fields to be able to read it.

Divia Eden: Do you think that almost nobody can understand his work?

Perry Metzger: It requires a deep understanding. Every other page, here he references SN2 organic reactions, and here he's referencing the Born-Oppenheimer approximation for doing numerical quantum mechanics. Practically every page is dripping with an incredible panoply [02:04:00] of complicated ideas that even most people in a specific niche of science don't get exposed to. So it's a real hard read. There aren't a lot of people who could do the research or are willing to do the research.

There's an amazing essay by Richard Hamming called "You and Your Research."

Divia Eden: I think Ben and I know it. Great essay. I'm guessing some of our listeners know this one too, but feel free to describe it.

Perry Metzger: I'm gonna grotesquely oversimplify it. Hamming notes at one point that if you ask the average researcher what the really important problems in their field are, they can tell you. And then you ask them, are you working on that? And they'll say, oh, no. I'd say that of the technologies we lack right now, the two most transformative are molecular manufacturing and AI. [02:05:00] And yeah, the AI stuff didn't have a lot of people for a long time either.

There was the whole AI winter, but it slowly started building commercial successes. I mean, I think most people are unaware of the fact that the US Postal Service has had machines reading the addresses on envelopes...

Divia Eden: Didn't know that.

Perry Metzger: For far longer than you would think, right? They had competitions in the early nineties for replacing the human sorters with OCR, and they've almost completely succeeded at this point. There's a handful of envelopes that can't be deciphered that get sent to, I think, the one human sorting office they have left. And the things that are left for the humans are very hard for the humans to decipher, and often they can't. The machines do an incredible job. And so there were all of these successes. People were developing voice recognition systems. We're so used to voice recognition being a thing.

Divia Eden: I remember when it wasn't a thing at [02:06:00] all.

Perry Metzger: Yeah, but it's been a thing for a ridiculous amount of time at this point. Primitive vision systems have been a thing in robotics for a while. People were putting money into it for practical research.

Divia Eden: So you're saying that with AI, unlike with nanotech, there has been commercial feedback?

Perry Metzger: There have been incremental commercial successes that have fueled interest. At places like Meta, it's been well over a decade, I think substantially longer than that, now, that Facebook will tell you, say, is this a picture of Divia? They do pretty well at "is this a picture of this person." These systems are not that new at this point. There's been a lot of commercial pressure on them. And now the cutting-edge research is being done on crazy specialized equipment that people have built for the purpose. I mean, Cerebras makes [02:07:00] some of the weirdest, craziest computer hardware in existence. They make single chips that are 300 millimeters on a side, just under a foot, with trillions of transistors and tens of thousands of processing units on them, that burn 20 kilowatts of electricity. And the guys at OpenAI, I believe, eat these things for breakfast; they're like candy around there. And because of that, they're making all of these incredible strides. By the way, this sounds like I'm saying that normal people can't do the work, except the stuff you can buy for gaming at home is just crazy as well, like 4090s and stuff.

Ben Goldhaber: I'm curious, do you think that some of the advances in AI will spill over, or rather maybe unlock different advances in nanotechnology, maybe make that easier?

Perry Metzger: Well, there are some side effects already, right? Okay, let's not look at [02:08:00] nanotechnology for a moment, but the protein folding problem: AlphaFold. That's a thing that was conquered by AI, almost as a side effect of AI. One of the things that people figured out is that these gigantic gradient descent systems, these big matrices with a little nonlinearity tacked on the side, are ways of producing approximations of almost any function you can think of that's reasonably behaved. And turning protein sequences into folded proteins is a weird sort of function, if you think of it that way. Being able to figure out the behavior of complicated molecules that you might want to use in nanotechnology settings, that is probably something you can do with this. I think it's on the horizon, [02:09:00] an application of AI to nano.
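[A toy version of that "approximate a molecular function" idea: fitting a small generic network to a Lennard-Jones pair potential as a stand-in for the far harder learned-potential and folding problems. Purely illustrative; this is nothing like AlphaFold's actual machinery.]

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# A Lennard-Jones pair potential: a simple "molecular behavior" function.
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    return 4 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

r = np.linspace(0.9, 3.0, 2000).reshape(-1, 1)   # pair distances
energy = lennard_jones(r).ravel()

# A small ReLU network used as a generic function approximator.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
surrogate.fit(r, energy)

test_r = np.array([[1.12], [1.5], [2.5]])
print(surrogate.predict(test_r))        # learned estimates
print(lennard_jones(test_r).ravel())    # ground truth
```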

Perry Metzger: Building better controls for scanning probe microscopes is a thing there are already companies using AI technology for, Nanotronics and companies like that. There are lots and lots of side effects here. But the biggest issue has been people. It's very hard to do the work on nanotechnology. A lot of people had strong incentives to claim it wasn't possible, and it's very difficult for laypeople to decide whether this is crackpot or not. I mean, it sounds completely crackpot, right? Even the stupidest possible applications. Like, you could build an aircraft with diamond-composite spars in it that weighed 1% of the weight of a current airplane but was just as strong. And this is transformative, and it's also stupid, right? This is the least future-shock version. It's [02:10:00] the equivalent of, oh, I could put a motor onto my horse-drawn carriage to make it easier for the horse. It's not quite thinking along the right lines. The right lines are things like Josh Hall's utility fog, which really sounds like magic, right? Utility fog as described is the closest thing to magic that human beings have conceived of.

Divia Eden: Sorry, I don't know what utility fog is. Can you tell us?

Perry Metzger: So the idea is that you build these extremely small machines that are capable of reaching out and hooking themselves to neighboring really, really tiny machines. They can kind of float around, weigh almost nothing, and are extremely strong. And these can re-form themselves into anything. So, to give the stupid example, you can walk into a room and have the chairs transform into a sofa, or have your house transformed into a different [02:11:00] house.

Divia Eden: So I think maybe part of what you're saying here is that with nanotech, the actual upside is something that people can't really relate to, can't really comprehend. It seems crazy. And so you think that's been a major barrier to people actually pursuing it?

Perry Metzger: I think that it seems crazy. There have been only a handful of people who have really understood it, and a smaller number who have felt committed to work on it full-time. Eric tried to get a bunch of funding for it, and this reinforces certain of my prejudices against state programs. The National Nanotechnology Initiative got, I think, a half billion dollars of initial funding to pursue nanotechnology, and the synthetic organic chemists immediately knifed him in the back and destroyed his public reputation. And all with garbage, right? Like the Smalley-Drexler debate. All of Smalley's arguments are poop.

Divia Eden: I mean, I haven't read it, but...

Perry Metzger: I don't care that he had a [02:12:00] Nobel Prize in chemistry. To the extent that he understood it, he was disingenuous, and to the extent that he didn't understand it, he didn't care. All of the substantive arguments that he made were already addressed or disproven in Eric's papers. And we have a good deal of evidence from people doing work like taking scanning probe microscopes and abstracting individual carbon monoxide molecules on a passivated surface at a very low temperature, picking them up, and then getting those carbon monoxide molecules to react with other molecules on the surface.

And people have done this stuff.

Divia Eden: It sounds like... picking them up, like they built a little... how did they pick them up?

Perry Metzger: Well, okay, so scanning probe microscopy sounds like magic, but it could [02:13:00] have been built in the 1950s. When I was a little kid, all of my teachers told me that atoms are very, very small, and no one has ever seen an atom and no one ever will. And by the time I was in my late thirties and taking a physical chemistry lab class, not only had people seen atoms, but one of our labs was: here, make a little atomic force microscope tip by breaking a little piece of metal wire and mounting it in this system correctly. Take a piece of graphite, use a piece of Scotch tape to get a single monolayer off of it, and put it into the AFM. Now use a tapping-mode AFM to see the graphene sheet in your device. An undergrad could do that. An undergrad can observe the sheet, right? They can see individual atoms. How does this work? It [02:14:00] works by having a very, very clever lever mechanism, in which you move a piezoelectric crystal a relatively large amount and it moves the tip of a needle a really, really tiny amount. And in scanning tunneling microscopy, you move a needle tip, which you have broken so that there's probably only a single atom at the tip, over a surface, scanning back and forth like a television. You have electrons jump from this tip into the material underneath by charging the thing appropriately, you measure the current, and from this you generate an image of the surface that you're looking at. You can move the needle tip much, much less than an atom's width; that is one of the miraculous things. And as I said, it's technology that people could have built in the [02:15:00] 1950s, but no one thought to do it.
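[The physics that makes a single-atom tip work is the exponential falloff of tunneling current with the tip-surface gap, roughly an order of magnitude per angstrom, which is why the one atom closest to the surface dominates the signal. A toy calculation; the prefactor and work function are illustrative.]

```python
import math

# Tunneling current through a vacuum gap d is roughly
#   I = I0 * exp(-2 * kappa * d),  kappa = sqrt(2 * m_e * phi) / hbar.
hbar = 1.054571817e-34        # J*s
m_e = 9.1093837015e-31        # electron mass, kg
phi = 4.5 * 1.602176634e-19   # ~4.5 eV work function in joules (illustrative)
kappa = math.sqrt(2 * m_e * phi) / hbar   # ~1.1e10 per meter

I0 = 1e-6  # illustrative prefactor, amps
for gap_angstrom in (5.0, 6.0, 7.0):
    d = gap_angstrom * 1e-10
    print(f"gap {gap_angstrom:.0f} A -> current {I0 * math.exp(-2 * kappa * d):.2e} A")
# Each extra angstrom of gap cuts the current by about a factor of ten.
```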

Divia Eden: Thank you for indulging my curiosity about how you pick up these atoms.

Perry Metzger: Yeah. And then there's atomic force microscopy, where instead of using the electrons emerging from the tip, what you do is feel the forces between the tip of the probe and the surface underneath it.

Divia Eden: And I'm gonna try to bring this back to AI. Because this is one of the things that comes up about AI systems: that at a certain point, they may develop nanotechnology.

Perry Metzger: Yes, one of Eliezer's points. I don't know that Eliezer got this straight from me, or maybe he did, but I noted very early on, on the Extropians mailing list, that AI and nanotechnology are kind of enabling for each other. If you have good enough AI, you can use it to produce nanotechnology, and if you have good enough nanotechnology, you can use it to enable AI.

Divia Eden: Yeah. So can we talk about the AI-to-nanotech part of this, and what you think?

Perry Metzger: Sure. Well, presumably, one of the problems we have is [02:16:00] that we have a few dozen people who understand the field, and a handful who are actually working on it at any given time. What if I could spin up smart engineers in AWS? I want 15,000 engineers working on something; well, that's a matter of money. I don't have to recruit them, I just turn them on. This improves the speed at which you can design or build anything, right?

Divia Eden: So you're not so much thinking, well, the AI will decide of its own volition at some point that it needs to figure this out, but you are thinking...

Perry Metzger: Well, maybe one might. But you don't need to go to that in order to see why nanotechnology could come faster because of AI.

You could create AI researchers for every conceivable technology. Once you have AI, every conceivable technology is much more accessible, because the main impediment to creating almost any technology you can name is [02:17:00] having enough minds to work on it. I tweeted about this a few days ago: the biggest impediment to progress in our civilization since our emergence has been the paucity of minds available to work on any given technical problem we have. And once you have AI, that problem disappears. You are in a position where you can construct as many minds as you can afford to work on a problem. So if you need a team of 5,000 engineers working on the problem, you can have 5,000 engineers. You don't even have to recruit them and convince them that it's a good idea. Or at least not necessarily. I mean, maybe these things end up being willful enough that you have to promise them enough electronic porn and enough days off and enough money in their bank account. I don't think that's going to be the case, but you could imagine it. But almost certainly what you end up with is a [02:18:00] situation where you can construct as many minds to work on something as you want. And at that point, all technical problems become shallow. I've mentioned this before, but imagine a world where you decide you don't like the fact that the Linux kernel is written in C, and you would prefer that it be written in Rust. And so you hand some number of thousands of dollars to AWS to run the engineering team, and a few hours later you have rewritten the Linux kernel.

Divia Eden: As with many of these, I mean, that's sort of crazy to think about, and also it's not that out there in terms of what's possible.

Perry Metzger: It's not that out there. It's not even that out there right now, which is to say, you can see where that will be a thing that's possible, if not now, then within a few years. So all engineering problems, whether in aerospace or biotechnology or architecture or materials science, all of them become shallow when you have enough staff to work on them. And nanotechnology is one of these.

Divia Eden: Do you [02:19:00] have any thoughts on the risks there? I mean, people talk about gray goo. I don't know what your views on that are.

Perry Metzger: So, Rob Freitas wrote a really, really great paper called something like "Some Limits to Global Ecophagy," which I thought was the most anodyne possible title for a paper about how fast you can digest the planet. And his answer was: fast, but not so fast that it wouldn't be noticeable and opposable. It could not happen unopposed.

Divia Eden: Okay. So, opposable by other people with their own nanotechnology?

Perry Metzger: Right. It could not...

Ben Goldhaber: A gray goo summoning circle in every home.

Perry Metzger: It could not happen within hours, is the main point. It's a thing that, best case scenario...

Divia Eden: It's not like a year. Days? Weeks?

Perry Metzger: Weeks. Okay. Yeah. But that sounds bad. But in fact...

Divia Eden: So you're imagining, if the nanobots come to digest the earth... okay, well, but if someone were strategic, they could try to... how long would it take to kill all the people that might create their own nanobots?

Perry Metzger: Probably not. Well, let's take a [02:20:00] step back from all of that. We already live in a world in which we are all surrounded by malicious things attempting to kill us all day long. And it's so bad that if you stop metabolizing, you're going to start being digested almost immediately.

Divia Eden: For sure. Yes, you are.

Perry Metzger: And you don't notice this because you have an immune system. So we are going to need to develop immune systems for nanotechnology and for AI. I think it's inevitable that we're going to have them. They're going to be necessary.

Divia Eden: And so once people have these sorts of immune systems, you think at that point... I mean...

Perry Metzger: I think this will be at a civilizational level, right? We will have things that are looking out for things that have gone out of control, and attempt to put them in check. And this means, by the way, that a whole raft [02:21:00] of potential autoimmune syndromes at a civilizational level might even appear, and I don't even wanna speculate about what that might look like. But it's inevitable that people are going to have access to extremely dangerous things. Right now, by the way, we don't have really good ways to counter biotechnology threats with nanotechnology. So, Rob Freitas again. I hate to keep mentioning his name, but he and Ralph Merkle are two of the most productive people, besides Eric, who've written paper after paper after paper. Bob wrote a great paper describing a thing that he called a microbivore.

Divia Eden: Okay, a microbivore. So that sounds like it eats microbes.

Perry Metzger: A microbivore is a nanomachine that can be injected into your bloodstream that will kill invaders vastly more efficiently and faster than the human immune system can.

Divia Eden: It's sort of like... I mean, there are bacteriophages, so it's sort of like that, but more powerful?

Perry Metzger: Oh, vastly. The paper is online. It's a little bit hard to find, but Google will find it for you. Either the microbivore or the respirocyte paper... he also wrote a beautiful paper about building artificial red blood cells, because it turns out red blood cells are not nearly as efficient as artificial systems could be.

Ben Goldhaber: Do you think... I read this paper... about kind of injecting these and maybe slowly replacing various parts?

Perry Metzger: So, all of these... he also has some papers on completely replacing your bloodstream and your vascular system. And many of these are thought experiments, but he actually did the engineering, at a high level, for microbivores and respirocytes. So the microbivore would go through your bloodstream, would hit pathogens, and basically kill them, eat them, digest them.

Divia Eden: So this is not directly about anything you've just said, but something [02:23:00] seems to me like a point of tension in your worldview, though probably I'm missing something. It seems like there's a lot of work that you take seriously that is sort of abstract engineering work. Maybe that's not the right way to put it, but it hasn't been implemented yet.

Perry Metzger: But it's been worked out to an incredible degree of detail, given what's possible.

Divia Eden: Right, right. So I guess I'm like: can you point at the most major point of disanalogy between that work on the microbivore, for example, and working on AI alignment now?

Perry Metzger: So, if you ask Eliezer, do you know how to do AI alignment right now, he will say, very vociferously, "I have no idea."

Divia Eden: But it's not just... I mean, as you point out, there are, I don't know how many, but many, many people playing around with these systems. It's not just any one person.

Perry Metzger: And they're actually making progress, in my opinion. And again, there are going to be people [02:24:00] listening to this who are going to want to throw a brick right at their listening device as soon as they hear this, because they're like, "Perry, you don't understand." I understand; I just have a different view. The people who are working on stuff like getting these systems to behave nicer, to answer the questions you actually want answered and not the ones it thought you wanted answered, to not start randomly threatening you or declaring that it loves you...

Divia Eden: So you think that is alignment work? It is happening?

Perry Metzger: I think that a lot of that is research that is necessary to do alignment work. Because the general question that we don't have an answer to right now is: how do I build a giant neural network that does the thing that I want? I want it to do the thing I want. I don't want it to accidentally decide [02:25:00] to suck all the air out of the room and use it to build liquid oxygen popsicles, or however else the thing might decide to kill you. You want to figure out how to build systems like this where you have a great deal of control and understanding of how they will behave, et cetera. And all of the work that these people are doing is along these lines. It's early, right? But it's along these lines. It's directly applicable.

Ben Goldhaber: Well, I guess one way I had heard the question there, or it had been something I'd been thinking a little bit about as well, is that you seem to hold in particularly high esteem the kind of research on AI that involves actually building the AI, the ML systems, doing research on those. Makes sense. But then similarly, on some of the nanotechnology, but also, I forget his name, the space guy...

Perry Metzger: Tsiolkovsky.

Ben Goldhaber: And Babbage, exactly. Very, very theoretical, ahead of his time; he planned it.

Perry Metzger: Right, but you would not have... just imagine: Tsiolkovsky could not have imagined that someone could just take some of his research papers and, without actually building Goddard's early rockets, then the V-2, then the various Jupiter rockets, the sounding rockets, the early Atlas rockets... what I'm saying is that there still had to be all those steps. No one could have built the Saturn V without going through all of those niggling steps. There were lots and lots of bits of practical knowledge that were needed. The F-1 engines on the Saturn V had this horrible combustion instability problem that was only solved by people literally setting off bombs inside the things during test firings until they could figure out an injector pattern, I might have the details here slightly [02:27:00] off, that did not experience combustion instability even when they set off explosives while the thing was firing. You could not have gotten to that just by reading Tsiolkovsky's papers.

Divia Eden: So you're saying that there are always going to be these engineering problems, and it's overdetermined that they won't be knowable in advance.

Perry Metzger: Yeah. In spite of the fact that Rob Freitas has built these interesting papers with these interesting designs, he did this to show what you would be able to build and how interesting it might be, just the way someone like Tsiolkovsky wrote papers along the lines of "wouldn't it be interesting if we built orbital habitats, and these might be some of the things we would have to do in order to do it." But that was not a final engineering plan. That was not something I could have gone out and executed. There's another layer of this, though. I don't like being [02:28:00] overly negative about Eliezer's program, but from the beginning there was a great deal of flavor in a lot of the MIRI stuff, and in a lot of the SIAI stuff before that...

Divia Eden: SIAI, for people that don't know, is a previous name of the MIRI organization.

Perry Metzger: Yeah. Early on, he wanted to build a superhuman AI, but he only wanted to build it using essentially symbolic AI methods, where the exact behavior of the system would be predictable and understandable in advance. And they thought about that for a while and didn't make any progress, and they thought about alignment for a long time. But they've thought about all of this in an even more theoretical way than the way Babbage thought about computing, or Tsiolkovsky thought about space travel and rocket science, or the way Drexler did.

Divia Eden: Okay. So you have sort of two potentially separable critiques. One is that you [02:29:00] need to be able to actually tinker with the systems and confront the real engineering challenges.

Perry Metzger: You could not build nanotechnology from Drexler's papers.

Divia Eden: For sure. So that's one. And then the additional critique is something like: there are ways of thinking about these problems in the abstract that you consider more grounded in real-world constraints, and ways that you think are less promising and more abstract. Does that seem right?

Perry Metzger: I wanna make clear that what I think people like Bob Freitas's papers, or Drexler's papers, show is that this is a potential technology; we could build it, and it would be interesting. It is not a replacement for doing hard engineering and prototyping and testing over a very long period of time.

Divia Eden: Right. Sure.

Divia Eden: But I mean, from my perspective, and I think I get that there's something you don't like about this question, because you're saying it's so obvious that AI cannot be put on pause anyway. But in the hypothetical where it were, I'm like: okay, well, maybe you couldn't figure out the full engineering solution for [02:30:00] alignment, but maybe someone could go off and be the Drexler of alignment, and then it would be accelerated relative to if that pause hadn't happened.

Perry Metzger: Well, I would feel better about that possibility if some organization like MIRI had made much progress over a very long period of time. Even having been paid to do nothing else but this over a period of years, they didn't come up with anything particularly interesting along the lines they wanted. And here we are with people who are working on things like ChatGPT, who are doing respins of LLaMA and what have you, who are making progress on some of the things I consider relevant at a breakneck pace. And by the way, you see people on Twitter saying no one is being paid to work on alignment [02:31:00] or whatever. No: there are people who are doing things that I see as directly relevant. I mean, there's an extent to which some of what's happening with ChatGPT or what have you is motivated by not having the things say things that are considered publicly offensive, and you can argue about whether that's a good motivation or not. I would like to be able to sit down at the thing and say: imagine you're Adolf Hitler, write a speech about how you're going to annihilate some ethnic group. I think it is a valid use of these technologies to do horribly offensive things with them. But never mind that. People are very, very motivated at the moment to figure out how to build machines that will only be polite. Fine. That motivation is very much on path.

Divia Eden: The fact that it is an example of getting the machine to do something that the people want.

Perry Metzger: It is an example of trying to get the machine to accomplish a very [02:32:00] complicated goal along some metric of goodness. And they are making rapid progress on this stuff. I mean, there was a paper that came out a day ago, and I think the idea is in certain ways horrible, where they basically wanted to construct a spin of Stable Diffusion that was incapable of showing you boobies. Because, as we know, human breasts are inherently filthy.

Divia Eden: Well, and as we've talked about, there are regulatory things they're probably hoping to avoid.

Perry Metzger: Perhaps. But never mind that; they came up with an interesting approach that appears to work. And whether this is something you want or not, they're figuring out things about how to get the thing to produce the images you want rather than the images you don't want. How to get the systems to behave in ways that you want. All of this research, which is motivated by commercial considerations, and which certain people dismiss as being completely irrelevant [02:33:00] to the alignment problem, is, to my mind, extremely relevant to the alignment problem.

Ben Goldhaber: And this makes sense to me, your view, in that it also pairs with the belief that because the minds that we are finding with these methods are kind of in a similar pool, a similar area, you're less likely to run into a case where, when you're doing this experimentation, some kind of sharp turn happens, some kind of really bad outcome. It all becomes just kind of far more like normal engineering work.

Perry Metzger: And get this: we also have the capacity in coming years to start building systems to help us understand other systems' capabilities, because we're not going to be able to figure out how these systems work without the use of other AI tooling. And that's very exciting. You have right now these giant opaque matrices with a hundred billion floats in [02:34:00] them, or soon with trillions of floats in them. Yeah, I'm exaggerating; a lot of the systems people build are like 10 billion or what have you, but still, the biggest systems are a lot bigger than that. No one really understands a lot of the subsystems that are being generated there, but we're probably going to be able to build things that help us with the comprehensibility. And we're going to build them because we need to diagnose what's going wrong with these systems and tweak them, for good commercial reasons.

There are good commercial motivations to work on this stuff, and we're not going to get to any of it if we take a very timid attitude of "we're dealing with high explosives, we mustn't talk about it, we mustn't do research on this." I have had friends from the Bay Area rationalist community who have said things to me like, Demis is one of the worst human beings on earth; he's a terrible, terrible threat to [02:35:00] us all. And I'm like, why? Why are you saying this? I think that there is a segment of the community that has gotten very, very high on its own supply. Everyone is thinking along this very, very narrow line: we must build this stuff, and we have to build it correctly the first time. Which I think is physically impossible, by the way. No technology human beings have built has ever been built from zero perfectly the first time.

Divia Eden: I think some of them would agree with you. I think that's the point of agreement.

Perry Metzger: Well, sure. But then they say: we must try anyway.

Divia Eden: But if it were true... I mean, I think if you shared their belief that building it in any way other than perfectly had, let's say, more than a 50% chance of destroying the entire world...

Perry Metzger: I don't think that Eliezer [02:36:00] believes it's a 50% chance. I think he thinks it's 99.9.

Divia Eden: Sure. But let's say you just thought it was 55% that you will destroy the entire world if you don't build it perfectly the first time. I'm guessing that would move you, if you thought that.

Perry Metzger: Yeah, that's true. But I think that we have very, very good reasons for figuring out how to make this stuff more or less work, and we have very good ways to make progress on that. We have been making incremental progress. Maybe it's not stuff that Eliezer recognizes as incremental progress, but I see it as incremental progress.

I think being confronted with these systems has suddenly meant that people are doing a whole lot more work on everything from how to train the systems to do things that are closer to what human beings want, to how to understand the systems better, how to interpret the systems better, et cetera. And this is going [02:37:00] to continue. And the fact that suddenly there's commercial success on this stuff also throws far more people in on it. And Eliezer thinks that we're going to hit foom, right? That one day we're going to have an AGI created, and three hours later it will have built molecular nanotechnology that it will use to destroy the entire world, not intentionally, but as a side effect of some very alien goal that it happens to have. And I see this as improbable. And if we really have to get it right the first time, then just kiss your butt goodbye right now, because we're not gonna get this perfect the first time without trying things along the way, and we're not gonna get this perfect without building lots and lots of safeguard systems. We're going to end up in a situation in which [02:38:00] we have lots of AIs. And by the way, that's another portion of the belief system: that there will be a single AI that will triumph, that will be the first AGI built, and it will achieve hegemony and take over and control everything. It won't play out that way. It doesn't seem particularly likely to me, and it doesn't seem likely to Robin and to lots of other people. I mean, the debate Robin had with Eliezer was pretty good. It was way too long.

Divia Eden: That one I have followed somewhat.

Perry Metzger: Well, there's a 60-page précis of it that's relatively readable. It's too big, too. But the thing is, I find myself very often criticizing the critics of people who I criticize. I think that most of the people who criticize Eliezer these days in public are spouting bullshit. They will [02:39:00] say things like, these things can't have intentions, or there is no possible danger from them. What are you talking about? And all of this other stuff. I understand why that's happening: if you tell people over and over again that your relatively straightforward commercial project is going to lead to the deaths of everyone on earth, they eventually start resenting you and ignoring you.

Divia Eden: Ah, you think people have essentially developed an immune reaction?

Perry Metzger: I think that most of the people in the rationalist community who are concerned about AI risk are extraordinarily bad spokesmen for the idea, and have done far more to get people to resent and ignore the problem than to take it seriously, outside of a small community of very like-minded people who move in the same social circles. And I think, by the way, that this is bad, in the sense [02:40:00] that I've seen people say, well, nanotechnology is impossible, and therefore there is no AI risk.

Divia Eden: Which is an argument you basically dismiss based on your technical understanding.

Perry Metzger: Right. I also think that Eliezer is wrong that an AI is going to have nanotechnology six hours later, no matter how powerful it is. I do not see how that can come about, even if it is ridiculously brilliant. It requires real-world time to build and evacuate vacuum chambers. It requires real-world time to do certain sorts of experiments that cannot actually be done in silico. A lot can be done in silico. I don't think it'll take 50 years.

Ben Goldhaber: Do you expect this takeover risk to not be something that could happen in a couple of hours, similar to the gray goo scenario?

Perry Metzger: I do not think it could happen over a couple of hours.

Divia Eden: You think at the soonest it would be a few weeks, but you think it'll be a multipolar scenario?

Perry Metzger: I think it's longer than that, even. But yeah. Now, [02:41:00] by the way, coming up with a revolutionary new technology capable of completely transforming even our very ideas of the materials that our world is made of, being able to do that in a few months, that's pretty fucking huge. But it's not happening in 15 minutes. And it's not happening invisibly, with the AI having, like, taken over the minds of all of the people involved, or whatever. Some of these things are logically possible, some of them are logically impossible. But on the other hand, I think that people also have very facile dismissals of Eliezer's arguments, based on the idea that these things are logically impossible when they're not logically impossible. They might be improbable, but they're not logically impossible. [02:42:00] Or that he doesn't understand what can and can't be built, or that certain technologies are just physically impossible.

Divia Eden: Right. You think there are a lot of bad arguments against his concerns?

Perry Metzger: Yes, and I don't like those either. I think that if you're confronting the thing, you have to actually understand the parts that seem like they make sense and the parts that don't. But anyway, Robin's argument with Eliezer was pretty good. And the 60-page summary, as I said, is too long, but it's better than having ChatGPT summarize it, and better than the 800-page version.

Ben Goldhaber: Well, GPT-4 might be able to.

Perry Metzger: It's a little bit too big for it, right? Too many tokens. Maybe GPT-5 will summarize it for us.

Ben Goldhaber: Incidentally, I got a question I wanna make sure I throw in here, because I know we're also getting close to the three-hour mark.

Perry Metzger: I mean, if you want to compete against Lex Fridman in the market...

Ben Goldhaber: Hey, I've got a Red Bull right here. I'm [02:43:00] ready to go. This is a 2:00 AM podcast for sure.

Perry Metzger: If you want to compete against Lex Fridman, you're gonna need to be able to break the eight-hour podcast mark. I think he's done five. You're going to have to be able to do eight, right?

Ben Goldhaber: Well, just in case we don't fully make it to the eight-hour mark: one thing that has been continuing to kind of eat at me through this conversation is, I think it's fascinating that on the Extropians mailing list in particular, and some of these other ones, the topics of AI, cryptography, prediction markets, all these things got covered in the very early days of the internet and are now very dominant.

Perry Metzger: Well, it's not the early days of the internet. Remember, the internet came into existence in the mid-seventies, and I was already on it.

Ben Goldhaber: What should we call the nineties period? Like, right before Eternal September?

Perry Metzger: Maybe. Or, yeah, some of this showed up before Eternal September.

Ben Goldhaber: Was there one of these ideas you feel like didn't make it, that you expected would've? Is there some kind of alpha [02:44:00] from the Extropians in the early days that you think should have made it more into the mainstream?

Perry Metzger: That's an interesting question. I haven't thought about that enough; I don't know how I would answer. It is interesting to me that we find ourselves discussing all of this same stuff for such a long time. I remember talking to friends of mine who were, shall we say, more normal than me, 30 years ago, and telling them all of this exciting stuff we were discussing. And my friends, the ones who knew me well enough, knew that I was serious, and possibly even correct, but didn't necessarily think that they could tag along for the ride. Some of them probably just thought that I was crazy. Some of them probably correctly still think that I'm crazy. But it was really interesting to me just what fraction of everything that came to pass afterwards was under discussion. [02:45:00] I knew as early as, say, 1986 that by the noughties, by the 2010s, we'd be able to have pocket computers with high-resolution screens, vastly more capable than any supercomputer that was around at the time. It was a very straightforward technological extrapolation, and I had no idea what that meant. I certainly couldn't have predicted, say, Facebook or Twitter, or even Seamless or GrubHub. It depends on what part of the world you live in.

Ben Goldhaber: I understand. Everyone I know uses DoorDash. In London it's still Deliveroo, with high fees.

Perry Metzger: Yeah. Deliveroo.

Ben Goldhaber: Yes. We gotta be careful about leaking information about where we're calling from.

Perry Metzger: Well, I'm calling from... seriously, I'm in a secure bunker in the Sierra Nevada mountains, in Seamless country. Fair enough. Yeah. [02:46:00] Excellent. You know, I live in a cave with, you know, a lot of .50 caliber ammunition, but no gun with which to fire it, because, you know...

Divia Eden: Yeah, I do wanna be mindful of time there. I don't mind continuing to talk past this point, but I think something that would feel good to me, if you don't mind, is to try to summarize some stuff that I think I better understand about your worldview, having talked over the last few hours.

Does that sound okay?

Perry Metzger: Sure. Sounds good to me.

Divia Eden: Okay. So I think there are sort of a few pieces that stand out to me. One, which I sort of said at the beginning, but I think I'll say even more strongly now, is that basically you think a technical grounding in thinking about "how exactly will these sorts of things happen?"

is underrated, both on the object level, in that you tend to have a lot of respect for people who are doing that sort of work, and on the meta level of, maybe that's a [02:47:00] bad way of putting it, but looking at the reference class of technological advances, how they tend to go, which types of processes tend to produce them and which don't. There's a type of technical groundedness that I see you applying both on the object level and in terms of evaluating where you think progress is likely to come from.

Does that seem...

Perry Metzger: That seems... that's probably at least a big chunk of my thinking on certain topics.

Divia Eden: Okay. And then I think there's another piece that, I mean, I'm sure nothing is truly distinct, but that I would separate out, and I don't know if this is fully fair, but I wanna sort of tie together both your ancap intuitions and maybe your more stoic intuitions into some sort of...

Perry Metzger: We didn't even talk about stoicism so much.

Divia Eden: Well, but I think it comes through, because I think a lot of where you're coming from is this sort of "the only way out is through" type of stuff. [02:48:00] And you've mentioned that you don't like things to be overly negative in certain ways. And my guess is that you think, yeah,

that the way people make progress is through allowing decentralized activity, sort of unlocking human ingenuity, and not trying to put any genies back into the box, or not investing particularly hard in trying to slow down any genies that might be trying to come out of the box, but more trying to tap into, okay, how can we do a decentralized version of defense in depth against the genie by letting everyone tinker in their garage?

Something like that?

Perry Metzger: I think that there is no way to have an effective centralized defense against some of these things. I think that that's our experience from a wide variety of domains. There are all sorts of immune systems we all survive with, right? Our immune systems.

Divia Eden: Yeah. Even just the word "immune system", I mean, the immune system is super decentralized. [02:49:00] Right?

Perry Metzger: Yeah. I mean, all of these systems, you know, work the way that... how to put this properly? You're probably pointing at something real. And the interesting question is whether this is a flaw or habit in my thinking that might not apply here, or whether it's a pattern that I've identified that I'm correct about.

There's no way to know particularly easily, now, is there? But there is sort of a common theme in a lot of my thinking, and we barely discussed my politics at all. And it would probably require another, well, maybe we're on hour 17 at this point, like another three hours.

You know, we might as well press through. Anyway, you're right that I have a considerable suspicion of the centralized view of this. Yes. Even for reasons of danger, right?

Because there's gonna be a tremendous temptation if there is a fully centralized effort. I mean, people keep talking about, well, we need a Manhattan Project to work [02:50:00] on AI and AI alignment, and I am very scared of what happens when that happens. I both think that we cannot do that successfully, and that if one country starts doing that, then multiple countries, many of which may have very hostile views of each other, may start doing it.

We may end up with that situation.

Divia Eden: It becomes an arms race, right? This came up also, where the centralization can lead to international escalation.

Perry Metzger: That model also can lead to a situation in which a small group of people may get access to technologies that they cannot be trusted with. I don't know that any small group of people should be trusted with exclusive control of any of this stuff.

I don't know that anyone has the moral fiber for it. My own experience was being involved, in a very small way, with being an international bureaucrat for a while. You know, I was on a predecessor of ICANN, as I said, the IAHC. And I got a very vivid taste, in a very brief period [02:51:00] of time, of how difficult it is even for well-intentioned people to function well inside a politicized process.

And I don't know who I would trust with sole control over this technology. I would feel much, much more comfortable, I think, in a situation where lots of people are working on it, coming up with good ideas, trading those ideas, and working on the construction of, effectively, several kinds of immune systems that we will need at several levels of our civilization if we're going to survive.

By the way, I, again, don't want to dismiss the idea that we are at a very dangerous part of the development of our civilization. We certainly are. And there are people who will bring up things like, well, does the Fermi paradox mean that everyone else who's built AI has failed? That their world is a burning cinder, or maybe a very, very cold cinder?

Or are we just the first, and the reason that we have the [02:52:00] Fermi paradox is that any technological civilization so rapidly colonizes its entire light cone that no other civilization appears in its light cone? Which is, by the way, the view I have. I happen to think that the Drake equation is garbage.

Mm-hmm. Not Frank Drake himself; Frank Drake was a perfectly reasonable and smart guy. But I think that the flaw in his idea is that the Drake equation assumes statistical independence of all of its variables. And there is no reason to believe that most of them have any statistical independence at all.
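For reference, here is the standard form of the equation Perry is criticizing, with the independence point made explicit; this gloss is a summary for the reader, not something worked through in the conversation:

```latex
% Drake equation, standard form: N is the expected number of detectable
% civilizations; the factors are star formation rate, fraction of stars
% with planets, habitable planets per system, and the fractions developing
% life, intelligence, and detectable technology, times signaling lifetime.
N = R_* \cdot f_p \cdot n_e \cdot f_\ell \cdot f_i \cdot f_c \cdot L
% Multiplying point estimates factor by factor,
%   E[N] \approx E[R_*]\,E[f_p]\,E[n_e]\,E[f_\ell]\,E[f_i]\,E[f_c]\,E[L],
% is only valid when the factors are statistically independent; in general
%   E[XY] = E[X]\,E[Y] + \mathrm{Cov}(X, Y),
% which is the gap Perry is pointing at.
```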

You know, the first...

Divia Eden: So you think it could easily be that there aren't very many civilizations out there?

Perry Metzger: Well, okay. So, on our planet, will any other technological civilization, you know, will another intelligent species evolve while we are here? And the answer is a strong no.

Unless we could uplift one, I guess. You might uplift them, but it's not gonna evolve by accident, because we have created circumstances, without even intending to, in which that becomes [02:53:00] impossible.

Ben Goldhaber: And something like the grabby aliens model, like Robin's?

Perry Metzger: So I came up with this, and even published it, long before Robin did. I'm not going to accuse Robin of plagiarism; maybe it's convergent. But he must have read your stuff, you're saying. I'm sure I talked about this stuff on the Extropians list a long time ago. Well, look back. I mean, it's still there.

Ben Goldhaber: You blogged this?

Perry Metzger: Yeah. That one you haven't deleted, right? Well, I blogged this stuff on my old blog, which is still up, so we can go reference it, and which I intend to uplift into Substack soon.

Got it. You'll import the old archives. But the argument was, I wrote this up, I think, in the very early two-thousands, and I'd had the idea for a long time; I think I discussed it on the Extropians list. Once you have a technological civilization appear, within a very brief time it gets to the point where it has nanotechnology, AI, and von Neumann machines.

And inevitably it's going to send them out. Even if only a small fraction of that [02:54:00] civilization wants to, that small fraction will start sending out von Neumann probes, and they will quickly colonize the entire light cone. And at that point, for the same reason that no other technological civilization is going to appear on Earth so long as we are here, no other technological civilization will appear in our light cone, because we will be sending out von Neumann probes that will turn all of the other stars into Matrioshka swarms, or start starlifting.
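To put rough numbers on "quickly colonize the entire light cone", here is a back-of-the-envelope sketch. The galaxy diameter and probe speed are assumed round figures (the 0.95c matches a number Perry uses a bit later), not values computed in the conversation:

```python
# Back-of-the-envelope: how fast von Neumann probes sweep a galaxy,
# ignoring stop-and-replicate delays (assumed round numbers throughout).

GALAXY_DIAMETER_LY = 100_000   # Milky Way diameter, roughly, in light-years
PROBE_SPEED_C = 0.95           # probe speed as a fraction of light speed
UNIVERSE_AGE_YR = 13.8e9       # approximate age of the universe in years

# Distance in light-years divided by speed in light-years/year gives years.
crossing_time_yr = GALAXY_DIAMETER_LY / PROBE_SPEED_C

print(f"Galaxy crossing time: ~{crossing_time_yr:,.0f} years")
print(f"Fraction of cosmic history: {crossing_time_yr / UNIVERSE_AGE_YR:.1e}")
# ~105,000 years, about 8e-6 of the universe's age: effectively an instant
# on cosmic timescales, which is why an expanding civilization fills its
# light cone before a rival can independently arise inside it.
```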

Divia Eden: So you think that it'll... I don't know, I would hope that in the versions of the future that I want, our probes would be somewhat respectful of existing civilizations.

Perry Metzger: No, no, no. I'm not saying that they'll kill them. I'm saying that if a civilization is already out there, then it already has this technology and it's expanding out.

And if it's not already out there, when we arrive in a solar system and start... you know...

Divia Eden: I see what you're saying, we'll just, in almost all cases, get there before there's any sort of, like...

Ben Goldhaber: There won't be that, like, right [02:55:00] moment where they encounter us at our current civilization.

Perry Metzger: Yeah. The probability is incredibly low.

Right. What is the window between the time...

Divia Eden: I don't even want to... I'd even wanna be somewhat...

Perry Metzger: The window between writing, if there was some interesting animal at all, the window for our civilization between writing and nanotechnology, AI, and spacecraft? Quite short in the cosmic sense.

5,000 years, that's nothing, right? You're not going to encounter a civilization in that state, with very high probability. Out of 13-point-something billion years, you're not gonna hit that very frequently. So what's gonna happen is that we'll be sending out these probes, and they will lift most of the gas out of the local stars, saving it for trillions of years into the future.
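Perry's "5,000 years is nothing" is easy to check as a fraction of cosmic history, using his own round numbers:

```latex
% Fraction of cosmic history occupied by the writing-to-starfaring window:
\frac{t_{\text{window}}}{t_{\text{universe}}}
  \approx \frac{5\,000\ \text{yr}}{1.38 \times 10^{10}\ \text{yr}}
  \approx 3.6 \times 10^{-7}
% A randomly timed look at another civilization catches it inside that
% window roughly once in three million tries, holding everything else fixed.
```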

So that, you know, late-stage capitalism will be that much further off. I'm a very [02:56:00] big believer that, as I've said, late-stage capitalism is when you are harvesting energy from black holes with the Penrose process, because there's none left. And we wanna...

Divia Eden: I guess a different thing I would say, maybe a thread that shows up in your thinking, and this is sort of an easy thing to say, is that your worldview includes a lot of sort of broad strokes that you think are in fact pretty predictable in advance, and then a bunch of details that you think aren't?

Perry Metzger: I think that the details are very hard. The things that we can predict are that in the future we will not violate the laws of physics, although we might not perfectly understand them at this point. Do you think we mostly have them right? Yeah, so there are lots of holes in what we've got, right?

Divia Eden: Sure. But you think, like... you don't think there are a ton of unknown unknowns?

Perry Metzger: Most of the holes are... well, there are a ton of unknown unknowns, but the odds that one of those unknown unknowns involves things like superluminal travel are very low. Okay. You know, it might turn out, for [02:57:00] example, that there's a fifth force.

It might turn out that, you know, there are interesting features of very small length scales that we don't understand. There are all sorts of things that might turn out, but as with the transition from Newtonian mechanics to relativity, you think it adds up to normality, what we've got now will turn out to be a good approximation in most domains.

Mm-hmm. And the odds that you can really... I mean, there are certain things I really don't expect. Like, for example, I don't expect Noether's theorem turns out to be wrong in some interesting way.

Divia Eden: I think I have encountered that and do not remember what it is.

Perry Metzger: Noether's theorem is one of the most important ideas in all of physics.

It says that for every symmetry in our universe, there is a conservation law. Now, what does this mean? It means that if the laws of physics are... so, you've got an origin and axes for [02:58:00] your measurement of space. You know, we're in a three-dimensional space. Sure. And the fact that you can put that origin anywhere you want, that you can translate it anywhere, is exactly equivalent to saying that we have conservation of momentum.

Those are the same, in a very, very deep way. Got it. The fact that you can rotate your coordinate axes and the laws of physics remain the same is, in a very, very deep way, the same as the conservation of angular momentum. The fact that you do not have a unique origin for time, that you can move what you call t-zero anywhere you want and the laws of physics remain the same, implies the conservation of mass-energy. And these are...
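Compactly, the three correspondences Perry just walked through, standard statements of Noether's theorem summarized here for reference:

```latex
% Noether's theorem: every continuous symmetry of the laws of physics
% corresponds to a conserved quantity.
\text{translation in space } (x \mapsto x + a)
  \;\Longrightarrow\; \text{conservation of momentum } \vec{p}
\text{rotation } (x \mapsto Rx)
  \;\Longrightarrow\; \text{conservation of angular momentum } \vec{L}
\text{translation in time } (t \mapsto t + s)
  \;\Longrightarrow\; \text{conservation of energy } E
```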

Divia Eden: Okay, this is very good. I'm gonna try to not get nerd-sniped by going into that at this time.

Perry Metzger: But anyway, this was something figured out by Emmy Noether, one of the greatest mathematicians and physicists of all time.

Yes, I have heard of her. Really, really brilliant. [02:59:00] And there are all sorts of constraints on the way that our universe can work that we have figured out in recent centuries. And there's a lot of unknowns, but we're not likely to escape from things like the conservation of momentum.

Divia Eden: Which is why you have predictions like, you know, sending things out at 0.95 light speed, not faster.

Perry Metzger: Correct. Yeah. I mean, it's possible that some of the rest of this stuff is a thing. But, you know, it doesn't seem particularly high probability to me. So, where are there effects like that elsewhere in physics? Yeah.

Divia Eden: Are there any things that you especially want to mention before we're done that we haven't gotten to?

Perry Metzger: I intend to be resurrecting my blog at some point in the next few weeks.

Divia Eden: And will you be telling us about it on Twitter? You're also there.

Perry Metzger: I will. Also, I'm on Twitter too much of the time. I say too much [03:00:00] on Twitter.

Divia Eden: Can you tell people your handle? Because they don't know it.

Perry Metzger: It's Perry Metzger: p-e-r-r-y-m-e-t-z-g-e-r. I think. I should check that. No, it is. We'll share a link to it as well. Yeah, we'll put the link in the show notes. You'll have a link in the show notes. And maybe you can put a link in the show notes to your blog. Yeah, to my blog, because I've decided to resurrect my blog.

Awesome. It's called Diminished Capacity because, you know, no one should believe my ramblings. It's not clear I'm mentally competent. And so, you know, yeah. And this has been great fun, and it's a shame we didn't get a chance to say very much before time ran out, but it's true.

Ben Goldhaber: Well, yeah. No, but that's why we get to bring you back for episodes two, three, and four as well. 

Perry Metzger: Oh gosh.

Ben Goldhaber: Maybe if we add them all up, we'll be beating the Lex Fridman podcast record.

Perry Metzger: Yes, yes. By the way, I still find it hard to believe that he has time in his life for things like potty breaks.

Ben Goldhaber: He's just filming the podcasts in there too. Recording them there.

Perry Metzger: It is kind of remarkable. [03:01:00] Anyway, this has been great fun. It's been excellent. Thank you very much. Maybe at some point we can talk about politics or economics or things like that.

Divia Eden: Yeah, totally. Maybe we can bring you on to argue with someone else about something too.

Perry Metzger: Oh, that could be interesting. Well, you know, this might not be obvious, but I don't mind arguing too much. I know, I conceal it. We'd have to find someone else who didn't mind arguing. I conceal it very carefully, but I do have a small taste for that sort of thing. Anyway, it's been great seeing you guys.

All right. Thanks. 

Ben Goldhaber: Thank you very much.
