Mutuals
Mutual Understanding Podcast
Ben Weinstein-Raun


Thinking with nuance about meta-ethics and the world.

Ben Weinstein-Raun

And so my guess is that fairness seems like a plausible candidate for something an alien species might have… kind of universal, given some sort of assumptions about how evolution worked or works.

Like there is something happening that results in us making choices. And if your philosophical determinism denies that, you're wrong. And I think it makes about as much sense to talk about will and free will in making choices as it does to talk about a glass of water.

Ben Weinstein-Raun’s Twitter and Website.

Timestamps

[00:02:00] - Meta-Ethical Stance

[00:11:00] - Contrasting with Moral Realism

[00:26:00] - What EA misses

[00:47:00] - Fanaticism and Out-of-Distribution Sampling

[01:28:00] - Building new Tools for Forecasting and Thinking

[01:44:00] - Actually using Tooling for Better Decisions

[01:53:00] - Guided by the Aesthetics of our Distributed Systems


This transcript is machine-generated and contains errors

Ben Goldhaber: [00:00:00] And hi, I'm going to maybe make a quick introduction to our guest here, and then we'll just dive into some conversation. Ben - who has a great first name - Ben Weinstein-Raun has worked at a number of top tech companies, including Cruise and Counsyl, a number of innovative tech research organizations such as MIRI and Redwood Research, and SecureDNA.

Ben is currently building a tool for forecasting, estimation, and, dare I say, thinking. Having known Ben for some time now, I'd describe him as a careful, discerning thinker on issues of not just technology, but also philosophy.

And so, yes, welcome to the pod, Ben. Yeah, thanks so much. Yeah, we're excited to have you. I was saying to Divia, one of the reasons I was excited to invite you on [00:01:00] was because of a really great conversation that you and I had a few weeks ago about naturalism and ethics and how it's put into practice, and, like all good conversations about philosophy, it was at like 3:00 AM around a kitchen table.

And I felt like you really captured something important about, I don't know, naturalism and ethics, and I really just wanna see if we could recapture some of that magic in our conversation today. Cool.

Ben WR: Yeah, totally. Great. Yeah, I'm really excited to talk about it. Awesome.

Divia Eden: I'm excited. I haven't heard most of this yet, so I'll get to hear it for the first time.

Ben WR: Yeah. Nice.

Ben Goldhaber: Well, lemme just ask, to start: how would you describe your moral stance?

Ben WR: Yeah, I guess at the moment I would describe my, I don't know, my best stab at understanding ethics as more like a pretty strong stance [00:02:00] on meta-ethics, and not as strong of a stance on, like, object-level ethics.

Where meta-ethics is sort of questions about where ethics comes from - like, why do we say sentences about morality, sentences about good and bad and should - whereas object-level ethics would be sort of: what sentences do we say, or should we say, about good and bad?

And sort of, what's true on the direct object level there? I guess, yes, so the meta-ethical stance that I really wanna take is basically derived from an observation, which is that meta-ethical questions are, under some pretty reasonable assumptions, I think, entirely empirical.

So whatever the truth of the object-level ethical facts, insofar as they exist, [00:03:00] there's some reason that people go around talking about shoulds and good and bad and so on. And that reason is entirely inside of physics. It's not mysterious. It's the kind of thing that we could figure out by looking carefully at humans and at human societies, and maybe at game theory and math.

We don't need fundamentally different tools to answer that kind of question, if we're interested in answering the straightforward question: why do people say sentences like that? What is the driver for humans having these intuitions and having these sorts of discussions and so on?

And so that's sort of the meta-ethical stance. And then I think taking this stance has some interesting implications for the more object-level stuff. So I think the most obvious one to me is that it does [00:04:00] not seem like it lends itself well to very simplified rule-based systems.

So I think it pushes me pretty far away from, like, a total utilitarian kind of a view, where you're aiming to make your object-level ethical system very simple and make it this very beautiful sort of object that, you know, is unassailable.

Ben Goldhaber: I'd love to ask a question there about that, because I could see - actually, an intuition that I feel like I have when I hear that is: from a meta-ethical point of view, everything is within this same kind of system, and I suppose all derived from physics on down. It lends itself to thinking about things in a - maybe not very, but, like, I could see it applying in a simplified manner.

Like thinking, okay, well, because it's physics and because it's something knowable, we can construct theories about [00:05:00] it that are simple and derive this kind of utilitarian calculus. But it sounds like you're actually pushing in the opposite direction and saying, no, it's much more complicated.

Ben WR: Yeah, I think basically my sense that you get something complicated is not solely based on the idea that meta-ethics is empirically addressable. It's also based on sort of observing what is going on when people talk about good and bad and should.

It seems to me that if you want to have your meta-ethical stance basically look correct, or consistent with the evidence, I think you need it to include some things which, to me, do not seem like they would come from an approximation to utilitarianism.

You need it to include things like - I don't know. So, Jonathan Haidt has this [00:06:00] book - oh shoot, I'm gonna forget what it's called cuz I'm being recorded. But it's the Jonathan Haidt book. I feel like there's one Jonathan Haidt book people quote in this situation. No, it's definitely that one.

Ben Goldhaber: There's -

Ben WR: Let's look it up, and then you can go back and say that. Yeah. Yeah. Okay. Is it The Righteous Mind? I think that is it. Yes. Okay. Okay. So Jonathan Haidt has this book, The Righteous Mind, where he goes into - excuse me - a lot of the sort of empirical analysis of what's going on in people's minds when they're having this sort of set of ethical intuitions and conversations.

And he comes out with several factors - I think it has five sort of key things that are factors of people's morality. And one of them - an especially important one, and one that I think is especially important to me - is [00:07:00] sort of harm-focused morality.

Which, I mean, it's quite widespread, but it's not the only thing going on for almost anyone when they're thinking about what's ethical. So,

Ben Goldhaber: So there are multiple different things going on when people are calculating, thinking about what's ethical, and a certain reductionist point of view that's just looking at the harm point of view is probably missing a lot of things. Is this one way to put this?

Ben WR: Yeah. Yeah. I think that's basically -

Divia Eden: Yeah. Can I - I just pulled up what the different axes, the five ones he has, are. Do you mind if I say them? Yeah, yeah, totally. Okay. So he has care/harm, fairness/cheating, loyalty/betrayal, authority/subversion, and sanctity/degradation.

At least, that's what I found when I looked it up. I have some imperfect memory that he added, like, freedom in there later, because he found that that was important to some people and it wasn't originally covered.

Ben WR: Yeah, and I think - I mean, I think he also has generally this take that [00:08:00] there could be a bunch more, and these were just sort of the most obvious ones when he was going through the available evidence.

And so, yeah, I think it's not clear that that is a complete list, but I think it is clear that your meta-ethics has to sort of explain all of that. And it's not impossible, I think, to end up with an explanation - like, from my current standpoint, it's not impossible that you would end up with an explanation that is more or less simple, like utilitarianism - but it doesn't feel like what you're gonna learn if you started off sort of with a clean slate.

Like just sort of examining humans as a species that has these sorts of intuitions and ways of functioning in a society. It just doesn't seem like that's gonna be near the top of my list of plausible [00:09:00] explanations.

I want to kind of almost pop back for a second, or maybe double click on the word meta-ethics. Cause one thing, when I hear that, I think about it as, like, selecting among some set of ethical philosophies. Is that how you mean it?

Not exactly. I think - so when I say meta-ethics, which may not be quite what people typically mean by meta-ethics when they're, like, professional philosophers - I mean something like: what are the sources of our ethical intuitions and of the sentences that we say about ethics? It's the sort of domain where you might wonder about whether moral realism is true.

Like whether there [00:10:00] really is some kind of objective moral truth. Whether, you know - is moral realism real? Yeah, that's a good question. I don't know. I mean, it's a real concept, although I think it might actually be a lot of different real concepts. But yeah.

Divia Eden: Are you up for saying what some of the different real concepts might be?

Ben WR: Oh yeah. I mean, so I think, in the local social sphere, I guess, people use the term moral realism in a way that is not quite the same as the way that philosophers use the term. Excuse me. I think philosophers, when they say moral realism, mean something sort of broader.

Like, is there any sense in which, you know, morality is real, or good and bad are real, or anything like that? And I think that that admits a lot more ways that things can be real than [00:11:00] people are often imagining when they're saying moral realism. So if you spend a lot of time around EA, you might come across disagreements where people are sort of talking about moral realism versus moral anti-realism.

And the anti-realists are taking this view that there's no supernatural, like, the universe doesn't care. It's sort of a combination of: I dunno, they think maybe morality is subjective.

They think, like, maybe not - like, aliens might not have the same moral systems as we would. They think that, you know, there's no sort of underlying supernatural thing that is morality or ethics. Whereas the realists think that there maybe is some kind of a thing like that.

Like, I don't know, you might get sentences like "the arc of history bends toward justice," or, you know, [00:12:00] "I only care about the worlds in which morality is real, because the other ones have no value according to me." And I think that this sort of splits too many things into those two categories.

And my current sense is that there is a sense in which morality is real. I don't think that it's supernatural. I don't think that the universe cares about morality. My guess is that some of it is kind of universal - like, if you were to go find some alien society, they would share some of our moral intuitions - and probably some of it is not.

Ben Goldhaber: And you suspect those universal features are derived from shared evolutionary patterns?

Ben WR: Yeah. Shared evolutionary patterns, game theory - basically the sort of [00:13:00] thing that might be common between our culture and an alien culture. Can you give an example of some things where you think the aliens would probably have the same moral intuitions?

Yeah. I think one thing that seems interesting to me is that fairness seems like it's quite common as a - I don't know if it's quite fair to say moral intuition, but it's a common sort of motivator across lots of different animal species, not just humans. It seems like quite an early thing to evolve, in terms of things that sort of look like morality.

And so my guess is that that seems like a plausible candidate for something an alien species might have, or something that would look sort of like how we would think of fairness. I think fairness seems like a strong candidate for something that might be kind of universal, given some sort of assumptions about how evolution worked or works. I [00:14:00] guess it does seem to me like slightly weirder decision theories are also another plausible candidate, where if you have a decision theory that isn't just CDT - sorry, causal decision theory - that is potentially going to help you coordinate a lot better with other people, or other members of your species, or people who you see as similar to yourself.

And so I predict that things like that are also gonna be quite common, especially, I guess, in social animals - or, yeah, I guess, you know, in animals generally.

Ben Goldhaber: I'm still kind of thinking about something that you mentioned, around why utilitarianism doesn't seem like a good approximation of the meta-ethics you might endorse.

[00:15:00] And I think you answered this in a way, but I didn't quite grok it, around the point of Jonathan Haidt and the different axes. And I'm wondering if you can say a little bit more about it. The way I'm kind of thinking about it is something like: maybe you can't make trades between those axes in some way that all sums up to a number, or something that the stereotyped view of utilitarianism might have.

Yeah. Is that somewhere in the ballpark? Or maybe just say more about that.

Ben WR: I do think that there's some - like, you probably could mathematically describe a way of making trade-offs, because, you know, obviously you sort of can't - I mean, it's gonna be hard to construct a system like that where you're not allowed to make any trade-offs.

But I don't think that's necessarily gonna be a simple sort of "add up all of the factors, you know, with different weights" or whatever. I think [00:16:00] some of it is gonna look not very consequentialist, and look sort of more like: are you being honest? Like, things that don't have direct - I don't know - where the morality of the thing does not route through its effects.

Right. If that makes sense.

Divia Eden: Yeah, I was trying to think of an example of this in my head. So I guess, like, I took one of those Haidt quizzes years ago, and there's some axis that I, and probably a lot of people I know, am somewhat low on - maybe a little higher than I used to be as I get older - which is something like sanctity. It's like, how bad is it to, I don't know, play a game of cards in a graveyard or something like that? Where, like, there are no concrete consequences that can be easily tracked, at least.

But a lot of people have some sense that this is wrong. And maybe what you're saying is, like, you know, if I'm imagining some poll that's like: how many, I don't know, how many dollars would you have to give [00:17:00] to the Against Malaria Foundation to make up for playing a raucous game in a graveyard?

That there's something ill-conceived about this. Is that sort of what you're getting at?

Ben WR: Yeah, I think that's basically right. Where it's not quite clear that there's, like, no trade that you could make, or anything quite that extreme. But I think it is sort of like - the question is not actually giving you the information that might be needed to answer it.

For example, it might depend a lot on your state of mind when you were playing the game. It may just be that the thought experiment does not actually give you the relevant details. But that doesn't necessarily mean that there isn't any kind of trade-off that you could make in any given situation, or that there aren't details that could be given.

But I think it does mean - I don't know - that it's not going to end up [00:18:00] being a very straightforward kind of function from the state of the universe to how good or bad it is. I think I especially expect it to not be a simple function of something like summing up all of the positive experiences.

So I think I find total utilitarianism especially suspect as an object-level system. It's not crazy to me to have, like, a utility function - I think that's maybe the kind of thing you can still have. It's just that I don't expect it to be a very simple one. And I think I might expect it to have terms about what your mental state is, and not just terms about what's out in the world.

Ben Goldhaber: Right.

And how do you maybe personally or philosophically relate to this? Is it something where you try to hold like multiple points of view and like make decisions from that? [00:19:00]

Ben WR: Yeah, I mean, so since I had this thought a while back, I've been thinking more in terms of - I mean, it's a little silly to say, but I've been thinking a lot in terms of, like, what would my grandfather do?

Or, like, what would my grandfather think is the right thing to do?

Ben Goldhaber: Oh, that's nice. I just like that immediately, but please continue. Yeah.

Ben WR: And I guess maybe similarly thinking about what seem to be kind of universal features of moral systems. So I think it has caused me to upweight being honest, because I feel like that is quite a common admonition.

It's also caused me to upweight, like, the golden rule, roughly - you know, some kind of thing vaguely shaped like "treat people like you wanna be treated." Maybe - I mean, there's lots of different ways you can add epicycles to make it [00:20:00] better, cuz there are obvious problems with it.

But using that kind of thing - where I have just observed that lots of different cultures and lots of different groups seem to value those things. And that seems like - if you were to take the set of things that are like that, that would sort of be a minimal collection of what might be considered human morality, right?

Ben Goldhaber: Try to shoot for, like, a minimum rule pack of human morality. Yeah. And then, layering on top of that - does that kind of come from your personal experience, or maybe, like, you're not supposed to almost add more things on top of that? And what do you -

Ben WR: Yeah, I think - I mean, so one question here is, like, why, if I have this view where ethics is all sort of explainable by physics, whatever - why should I find it compelling?

Like, why would I want to be moral? Mm-hmm. Mm-hmm. And I think that in some ways I shouldn't want to be, like, [00:21:00] absolutely moral. However, I do care about the systems that I'm embedded in. Like, I care about the world that I'm in, and seeing it continue to thrive and continue to exist.

I care about my friends, you know, and all this stuff. And my sense is that some of morality is going to be tied up with the preservation, or, you know, creation, of systems like that - and probably quite a lot of it. And some of that is gonna be directly helpful to me.

Like, it's gonna be derived from patterns which helped other people with those memes to, like, get more whatever - basically propagate themselves or their memes. And some of it is gonna be, like, it helped the societies that those people were in. And I think both of those can be quite compelling reasons to want to be moral.

And so I [00:22:00] guess that's sort of another source of maybe layering on top: can I figure out how this fits into that kind of picture? And it also helps, to some extent, to eliminate aspects of morality that I don't think are as important.

So, for example, I think believing in a particular God, to me, is not the kind of thing that I expect to come around to thinking I should do. Just because, like, there are too many different choices there. It's not really well justified, any particular one.

I might be more open to the idea that maybe I ought to pray or something, or do some kind of sacred ritual, which seems sort of more common. But yeah, I think [00:23:00] my sense is that, insofar as that would be helpful to me or the societies that I live in, there are better things that I can do with that belief - like, believe true things instead.

And so I'm not especially inclined toward that.

Ben Goldhaber: Yeah. It's funny that you started mentioning God and religion in this, because in part of what you were saying, I feel like I was hearing real echoes and shades of C.S. Lewis's concept of the Tao - like, some set of rules or principles behind civilization that seems to be universal, and that he tightly coupled with both morality and the Christian faith, right?

And yeah, I certainly see the way in which you're bringing this into almost like a game theory kind of mode - justifying it through, like: all right, well, these are the principles and rules by which civilizations and people flourish, these are the kind of underpinnings of morality here, and I should look towards those.

Ben WR: Yeah. Yeah, that feels really right to me. [00:24:00] I mean, I'm not sure if I've read the specific C.S. Lewis thing that you're referencing, but I have read a bit, and I think basically I often feel like I'm with him, like, 70% of the way. And then, as soon as he starts being very specific about why it's Christianity in particular, I'm like, hmm, that doesn't really make sense to me.

Mm-hmm. But I do, yeah, I do resonate with a lot of the stuff that he says about, I mean, sort of, yeah, the kind of thing that you're talking about.

Divia Eden: So I have a question about - when you were describing it before, it almost sounded like the grounding of, like, the motivation to follow these moral principles was to be helpful to you and the societies, the systems you're embedded in.

But then that starts to sound more consequentialist, in a way that I think you don't mean. And so my guess is that it's sort of hard to use language to talk about this, and that's kind of what's up here. But I guess I wanted to ask if you could expand on that point a little.

Ben WR: Yeah. I think, [00:25:00] like, it's not so much - when I say that, like, why do this thing - I think it is actually a bit consequentialist. It's just not sort of morally consequentialist. It's not that I think that I ought to do the thing which causes the best outcomes.

It's that I separately happen to want good outcomes, along with a lot of other things that I want. And wanting good outcomes, I think, leads to wanting to be moral, by sort of observing that I'm embedded in these systems, and that, you know, there are these apparent rules by which people seem to try to steer the systems to, like, whatever - better, more productive aims.

Ben Goldhaber: Is there a moral principle or universal feature of some of the groups that you [00:26:00] think maybe is absent from one of the societies or groups that you're in - like, something you think they should be doing more?

Ben WR: So one of the things that feels really important to me about this is that I agree with effective altruism - like, the movement - on a lot of key points that most people disagree on. And I observe that EAs seem to make classes of mistakes that I think they would not make if they understood this general direction of thinking at least a little bit better. So, like, SBF, I think, really is a true believer, a total utilitarian. And he's a smart guy. And I kind of think that - [00:27:00] my sense is that you can sort of only end up making the kind of mistakes that it seems like SBF and others at FTX made by neglecting some of these more traditional ethical principles, like: try to be honest, try to be scrupulous with people's money.

You know, things that - I guess it does seem like - I don't know. I mean, I really respect his and also Caroline Ellison's commitment to their moral principles. And, maybe controversially, I don't actually take FTX's collapse and fraud as that much evidence that they abandoned those principles.

In fact, it seems quite [00:28:00] consistent with those principles to make that kind of mistake. And I think it does feel really important to me, insofar as this is a correct observation, to point it out to the people who I think are trying to do as much good as possible, so that they might not make mistakes like this in the future.

Ben Goldhaber: That makes sense.

Divia Eden: Yeah. So can you be more specific about what it is that you wanna point out to people?

Ben WR: Yeah. I think something like: my mainline guess, and I think a very reasonable mainline guess, about sort of the metaphysics of the world is that basically physics is what's going on. Like, you know, the universe is more or less mechanical at the bottom layer.

And everything that we say and do has perfect explanations inside of [00:29:00] physics, and therefore can be explained by, basically, just being empirical. So I think a lot of people who I know have this sort of pseudo-supernatural - like, the people who I think of as thinking of themselves as moral realists in the EA sense.

They have a sort of pseudo-supernatural view of what it would mean for there to be, like, real morality in the world. And I guess it seems to me that there's a category of belief that one can have that is almost inherently disconnected from the truth of the matter of that belief.

And there's a way in which, if someone is like, ah yes, I think that there is some sort of [00:30:00] supernatural goodness thing, and, you know, it's gotta be the most beautiful, simple thing, probably utilitarianism - I think there's a way that, if that belief were true, there's no route for the truth of the belief to influence the belief.

And if it were false, there's no route for the falseness of the belief to influence the belief. So it's - you're saying it's unfalsifiable? It's not exactly that it's unfalsifiable. It's that physics is probably causally closed, insofar as anything can be. Everything that happens in physics, including all the things I'm saying and all the things I'm experiencing and we're talking about, has perfect explanations in terms of physics.

And so if you're gonna posit something outside of physics - because I have perfect explanations for everything in terms of physics, [00:31:00] there's no route for anything outside of physics to influence my belief, I think. So it's different than falsifiability, because it might be the case that I have a belief about, like, how evolution happened that in fact I will never get enough information to know the truth of. And I think that belief does not fail this test, but does fail the falsifiability test. Because for that belief, at least, there is some way that it could, in principle, be connected to the truth of the matter.

And yeah, I think that feels kinda like an important distinction.

Ben Goldhaber: And I'm still kind of puzzling on this. Like, so maybe some folks have this kind of conception of some thing outside of the realm of physics that is the source of moral truths. And your point is, many people [00:32:00] in the EA space don't necessarily think that is God, but they still treat it in some way like that.

And you'd wanna bring that back down into the realm of physics, while still holding - also believing - that there is some source of truth that is beyond, like - or that is somewhat universal.

Ben WR: Yeah, I mean, I think that that source of truth can be - and in fact, I think more or less the only source of truth, as far as I can tell, is - things that exist in the world.

And that seems like a totally reasonable place to me to look for true morality: just, yeah, try to figure out what it is we're talking about when we're saying these sentences.

Divia Eden: And so is it part of where you're coming from that you think it's appealing to people to create something elegant in their minds and, like, elevate it to a special place that's [00:33:00] ultimately ungrounded?

And is part of what you're saying, with your meta-ethical stance, that no, that's sort of unjustified, and people actually need to do the work of looking at the world and grounding their ethical intuitions? Does that seem - is that closer?

Ben WR: Yeah, I think that's almost exactly right. And yeah, the only place where I guess maybe I would slightly modify it is something like: it's not clear that you even have to ground it all the way out.

I think it's just going to be - I think it's important to keep this in mind and let it influence your probability distribution over what the truth is. Like, if you are, as I am, relatively confident that physicalism is roughly true, that should impact your beliefs about why people say sentences about good and bad.

If I imagine - yeah, yeah, yeah, sorry. Yeah. Like, if I imagine - [00:34:00] but yeah, go ahead.

Divia Eden: Yeah. If I imagine you say - you know, hypothetically you're running a bank, you have some customer funds, you're deciding what to do with them. Can you sort of walk me through how your physicalist meta-ethical stance would inform - like, can you sort of lay out the steps from that to then what you do with the customer money?

Ben WR: Yeah. I mean, I think for one thing, it pushes quite hard on things like honesty, which, as I sort of mentioned before, my sense is that honesty is fairly universally seen as, you know, the ethical thing to do, all other things equal. And I guess it's not necessarily the case that all of the actions with the money will be dictated by an ethical system.

Like, it might be that I have a lot of leeway over exactly what investments to make. Like, maybe I would like to just [00:35:00] maximize my return - that's probably ethical, at least assuming that I'm being honest about that with the people whose money it is. Maybe it's that I want to invest in things that I think will make a positive difference in the world.

It's not super clear to me that this is, like, required by an ethical system. My guess is that it is probably better to do things that are better, but my sense is that the true moral system is not extremely demanding, if that makes sense. Partly because, in fact, humans mostly are not fanatics.

And things seem to work out basically fine, and in fact kind of better, when they're not fanatics. And so I don't expect to find that the true moral system, insofar as there is one [00:36:00] - or maybe there is one relative to me, or relative to me in my situation, or something - is going to make demands of me.

Like, "you must find the best possible thing to do with this money." I guess it seems to me like it's likely to be very compatible with many options, especially ones that are sort of, like, fulfilling what is recognized locally to be the role of a good bank manager or whatever.

Insofar as that's what people are expecting from me. And that may or may not involve maximizing profit. It may or may not involve, you know, being lenient with people on their loans or whatever. I dunno, I'm not sure if this is a good answer to your question.

Ben Goldhaber: [00:37:00] I like it, because it did help me grasp something about your worldview here, and then also kind of what I feel like is a common - well, I'm not even sure failure mode, but, like, a tension point with a lot of the philosophies that we talk about. Which is, like: on the margin, what's the next action that you're gonna take, and how much should it be influenced by some other thing? Like, on the margin, are you going to donate the next dollar to AMF or MIRI or some other charity?

And then there's always this question of, well, why am I not donating the next dollar, and the dollar after that? And I guess what I'm hearing you say is something like: no, we should kind of resist, or at least be very skeptical of, this idea that ethics can be a universal operating system for your choices.

Is that an accurate kind of statement?

Ben WR: Yeah, I think basically it - yeah.

I think I do wanna say something like: [00:38:00] insofar as there is gonna be a true ethical system, I don't expect it to make a prescription about every action. Hmm. I expect it to make prescriptions about lots of actions, and to, you know, strongly push against some and strongly push in favor of others.

But I think there is something kind of wrong with the sort of obligation framing, which I guess a lot of EAs sometimes talk about with respect to ethics. Where I think the obligation framing of a particular ethical system is basically going to lead you into fanaticism, for basically the reason that you're talking about.

Like, ah, yeah, well, I spent my first, you know, 60% of my income on AMF; I guess maybe I ought to spend the [00:39:00] next 30% too. And then, I don't know. And my guess is that this just doesn't actually work that well for, you know, building a society - like, actually, in fact, causing the most good outcomes, if that really is what you want.

And I mean, in particular, I just think it probably isn't good - making reference to what I expect to find about what good and bad mean. Mm-hmm.

Divia Eden: Yeah. I think the part that I'm most trying to clarify in my mind is something like the step from the sort of non-supernatural, empirical view on morality to - then you are sort of looking - there's both some question about what actually makes societies work.

And then there's some additional sense that looking around at which principles people seem to converge on seems like a pretty good source [00:40:00] of information about what the true morality is. Mm-hmm. Am I - does that seem right?

Ben WR: Yeah, yeah, totally. Yeah, I think that basically hit the nail on the head.

My sense is that there are - I guess the way that I would phrase it or something is: there's sort of a first step, which is that you notice that you can answer these questions empirically, and then there's a second step. And I think it's easy to take the first step.

The second step is hard, which is: okay, now we actually have to do the empiricism and actually try to figure out what's going on. And I don't think that we've succeeded at that. I don't think that I can just go crack open Jonathan Haidt's book and read out what the true morality is.

And I think it is in fact a quite difficult project. Actually, my sense is also that this is basically, to some extent, the project of the original [00:41:00] sociologists. And I think sociology in the meantime has sort of gotten redirected to other things. But my sense is that they were really interested in cultural universals, specifically around sacredness and morality and religion.

And so my sense is that people have made these sort of halting steps toward the second step in this program - enough that I feel like I can say more than nothing about what I expect to find, but I don't think that I can say anything with all that much certainty.

Like, I think probably 80% of the things that I've said in this conversation so far I feel very uncertain about, really. And I would be unsurprised to learn that they were basically wrong, but some of them - yeah, go ahead - yeah. And maybe that. Yeah, and [00:42:00] I think partly I'm even open to the idea that in fact moral realism is false, and there is no sense in which morality is a real thing, and that, you know, there's sort of this non-cognitivist view that we're basically just making emotional expressions when we talk about good and bad, or something like that.

And that there's no sort of consistent, real thing there. It's pretty strongly not what my guess is. Like, I think there are enough things that I can point to where I'm like, okay, but that thing definitely exists, and it seems like it's part of what we're talking about. But I don't know.

I'm open to being wrong about even that.

Ben Goldhaber: It strikes me that some embrace of uncertainty, or toleration of uncertainty, is kind of central to your worldview on this.

Ben WR: Yeah, totally. I think there's a way in which - so some people talk about, like, moral uncertainty. I guess especially Will MacAskill has written some stuff [00:43:00] about it.

And when I first read that, or read about moral uncertainty, I had this sort of visceral yuck reaction to it, where I'm like, ah, but I feel like it's not really - he's describing it as sort of, oh, well, maybe 30% on utilitarianism and 30% on, like, virtue ethics or something.

And I just kind of get the sense that that discussion is often not grounded in how you might actually answer the question, and in trying to constrain your predictions so that they match, as well as you can, the evidence that's actually available. And yeah, I think it feels really key to me to have something like moral uncertainty, but to in fact ground it in

like, what - I don't know - how you would expect this kind of [00:44:00] exploration to play out.

Ben Goldhaber: This kind of exploration? Could you say more there?

Ben WR: Yeah. So, like, step two, I guess - where you're going from the observation that probably all of this stuff is empirically discoverable, right, and then in fact trying to go and discover it. So, right, right. Yeah. Like, trying to figure out why it is that people have fairness intuitions and, you know, things like this.

Ben Goldhaber: This makes a lot of sense to me. Or at least - one of the things I really enjoyed about talking with you about this was some sense of, like, yeah, there's something here that feels both very true and very humane, almost kind of human-centric. While also, as soon as I get to that word, I'm kind of thinking, oh wow, there's gonna be a lot of fighting.

There would be a lot of different points of view on this. And I imagine that's, like, some of the appeal of some other modes, where you can be like, no, we have the answer. [00:45:00]

Ben WR: Totally. And, like, I think, you know, there are a lot of benefits to making simplifying assumptions. I'm not necessarily saying that people shouldn't use something like utilitarian calculus as an input into their decision process.

But I think that people are sort of - I don't quite wanna say deluded, but, like, something like deluded into thinking that's -

Ben Goldhaber: Please say deluded. We need more good clips to be able to kinda, like, get some spice in, you know, for the Twitter fans.

Ben WR: Totally. Yeah. So, like, I guess I have a sense that people are sort of deluded into thinking that that's all they need to do - that that's, like, in fact the whole deal.

And as long as they've done that, then they're doing as much good as they can do - they're sort of unassailable morally. And I think that is not really true. I [00:46:00] mean, I think that point of view is both too harsh and not harsh enough.

Mm-hmm. It excludes a bunch of things that I think people in fact ought to be paying attention to, and aren't when they're in that mindset. And also, yeah, it does this sort of pushing-toward-fanaticism thing, and pushing away from having a healthy life.

Divia Eden: Yeah. Can you say more about why you think fanaticism is not good?

Ben WR: Yeah. I mean, I guess my sense is, like, if I think about the central examples in my mind of fanaticism from history, the fanatics often end up in the historical memory as being basically the bad guys - I think that's one part of it.

Or the people who were more fanatical, of the possible [00:47:00] choices, sort of end up seeming like the bad guys. That's not itself that much evidence, I think. But I also observe that, I don't know, the people around me who are the most fanatical are in fact not the most productive, or doing the best.

And in fact, I think there's probably an anticorrelation between those - another point, I guess, that feels like it's on the scales there. But yeah, I mean, I also think about times when the more fanatical group clearly caused a lot of harm. Two examples in my mind that come up a lot are the French Revolution and the Russian Revolution, where I think both of these were more or less driven [00:48:00] by a kind of fanaticism, which ultimately resulted, both directly, in a lot of bloodshed and suffering during the revolutions,

and also, after the fact, in installed regimes which themselves were, I think, quite clearly bad. Yeah - Napoleon and the USSR both seem to me like they caused a lot more harm than many other possible regimes might have. I dunno. And I think another example would be, like, China - Mao Zedong and various Cultural Revolution stuff.

I guess it just seems like many recent historical examples to me feel like they have this correlation between high fanaticism and worse outcomes. Whereas [00:49:00] in the American Revolution, my sense is that people were much less fanatical. They were much more pragmatic.

They, I think, sort of still saw themselves as, you know, roughly British people, and they were pissed off, but fewer people were calling for the blood of the elites or something. And I guess it just seems to me like that resulted in a better outcome.

Divia Eden: Yeah. So that makes sense to me in a lot of ways. It seems like - the term that comes to my mind is, like, a genre-savviness thing. It's like, "Are we the baddies?", like that GIF, whatever.

Ben WR: Yeah, exactly.

Divia Eden: And then - so, I'm sympathetic to your point. If I imagine myself coming from a more utilitarian point of view, I would say something like: okay, sure, but I would simply do the utilitarian calculus and count it as super negative that, you know, all the people would die of starvation, and therefore I would [00:50:00] not do that. Yeah. Which, I mean, I don't think you would particularly - you know, I think you'd probably be happy to have them taking that into account. But I think I'd like to hear a more direct response to: okay, but why not double down on the fanaticism and get better at calculating the consequences?

Ben WR: Yeah. I mean, I do think that there is some appeal to that, especially insofar as the world changes over time and we're sort of in a new regime where maybe we actually could calculate all of the consequences. Like, I don't know, maybe a hundred years from now we'll have Jupiter brains and be able to actually figure out what all the consequences are.

And I think in that world I'm a lot more sympathetic. For now, I don't think that's the world we're in. My sense is that I don't expect to be able to do those calculations well. If I were to try, I basically expect to do better, like, by my own lights, by sort of internally [00:51:00] appealing to my sense of what's wholesome and what is morally good in the sort of more traditional kind of sense.

Divia Eden: Like, you expect that sort of calculation to sort of fail on its own merits - to give a worse answer?

Ben WR: Yeah, totally. Right. And that's sort of what my feeling is about what might have happened at FTX. I think that there are ways that you can think that you are maximizing the good when actually you're neglecting a bunch of things that will in fact lead you to, in some sense, predictably fail to do anything like maximizing the good.

And you could also - I mean, I think a different way that you could see the FTX thing, which is counter to that, is: well, maybe they were actually maximizing the good, but they just were taking on a lot of risk, and the risk, you know, didn't pay off for them or something. And, like, yeah, that is sort of always gonna be a possibility.

[00:52:00] But I don't know, I think it's at least some evidence that that kind of thinking didn't pay off for them, and probably won't for others.

Ben Goldhaber: You mentioned the idea that if you're in a new regime, potentially you might change your mind on this - like, a regime in which we have Jupiter brains or some type of advanced AI to solve this calculation problem. Another thing that I sometimes hear arguing against the idea of certain universal moral precepts, or ideas like morality running through history, is: well, we are in a different time period and a different environment, where perhaps different sets of values end up leading to better outcomes. I'm curious if you think, one, that seems plausibly true about today, and/or two, is there a new [00:53:00] environment you might anticipate where you would throw out your, like, rule for honesty?

Like, are there certain things you expect might be, like, faster to be pushed off the ship?

Ben WR: Yeah. Yeah, that's a really great question. I don't know. I think my sense is that the current world is still basically composed of regular humans doing regular human things, existing in more or less regular human societies.

I think what I would say about a world where that no longer seems to be the case is that, if I were to throw out things like injunctions for honesty, I would not know what to replace them with that would actually be better. Like, I think it would at least require some sort of period of exploration, either on a personal or a societal level, or probably both, [00:54:00] to try and figure out which things actually would work.

Like, my guess is that it's not the kind of thing where you can just be like, oh, well, you know, we no longer care about honesty because we're in this weird world, we instead wanna just do the calculus straight up or something. At least not confidently, at any given moment.

Ben Goldhaber: We'll kind of switch tracks here and talk a little bit about AI.

You're someone who has both worked in various orgs that have worked on AI and AI alignment, and also someone who's been around this kind of scene that has been thinking about it for a while. So, you know, first question: how do we solve the alignment problem?

Ben WR: Yeah. I wish I knew.

I, yeah - it may be that I don't actually have that much useful to say on this topic.

Divia Eden: I mean, do your views on moral realism - [00:55:00] do you think they have any implications there?

Ben WR: Yeah, I do think that is relevant for questions of alignment. Like, for example, I think, at least in terms of whether I expect an aligned AI system to basically make hedonium or not - I think I basically expect it to not, right?

Because I don't actually think that's what human values push towards. And yeah, that's one thing to say about it. In terms of aligning an AI system with human values, I do think that there probably is a real thing that we're talking about when we're talking about human values, and that therefore, in principle, it's sort of possible to align an AI - which is maybe a thing that someone might disagree with.

I'm also not sure - separately, I think [00:56:00] a lot of people talk about, sort of, CEV, or coherent extrapolated volition, where, roughly speaking, it's sort of like: if you had a lot of time and a reasonable process for aggregating everyone's preferences into what we would really want, that's sort of what we should be aiming for. And that's not crazy to me.

I think there's a way that I disagree with an implicit claim in CEV talk, which is that there is a single CEV - that if you had a reasonable process for bringing together all these preferences and sort of hashing it out - it sort of leaves a lot to, I guess, what that process actually is.

Like, I sort of expect it to be kind of path-dependent, what the end result is. I basically just don't see a particular [00:57:00] reason to think that whatever it is that humans value, taken collectively, is going to be well defined without specifying more about exactly how you're aggregating preferences and so on.

Ben Goldhaber: And this is slightly different than your kind of view around this almost baseline morality, in some sense. Cause you expect that to be relatively - maybe not well defined, but relatively broadly spread among humans.

Ben WR: Yeah. Yeah. Right. I basically expect - like, it does in fact seem to me that humans almost universally have some kind of thing like morality. They, you know, have fairness intuitions. They sort of, you know, scold each other for doing things that they think are bad, you know?

Ben Goldhaber: Right. But preferences, in the whole broader range of human experience, might be much more path-dependent, much more varied.

Ben WR: Yeah, I think that's sort of right. Like, I don't expect that set of things which is more or less universal to - as I sort of said before, I don't [00:58:00] expect it to even necessarily make recommendations about every action that you would take.

And so I also don't expect it to make a particular recommendation about what to do with the future - like, a single specific one. I do expect there to be compatible ways of handling the future, and incompatible ones. And, like, the way that I personally think about the alignment problem is more like: how can we get the AI to help us into a compatible future rather than an incompatible one?

I haven't read the book Human Compatible, so I have no idea if this is about that, or if that's about this, or what.

Divia Eden: I'm not sure either. And my guess from how you're talking is that you think your views on moral realism - as you say, you think that they have some implications for, like, if we had an AI that was going to implement some sort of reasonable [00:59:00] process, how that might go, and whether that would be possible.

Mm-hmm. But it seems like - my guess is you don't think that the AI would independently be like, oh, this is obviously what I should do - and, yeah, that makes sense to me. But can you explain? Cuz I think sometimes people think, well, if moral realism is real, then the AI would come up with it and then we'll all be fine.

So why do you not think that?

Ben WR: Yeah, I think basically because the AI is not going to be a human, and, my sense is, it's not even going to be, like, an organism that evolved in a social context. I mean, it might in fact be something like that, depending on, like, questions about how the future goes.

In which case maybe it would have something like fairness intuitions, or something like that. I don't know. But if there were to be, like, a singleton, I don't see any reason that it would happen to find the same sorts of [01:00:00] strategies for perpetuating itself that human societies have found.

And, like, as I said, sort of the reason for me to be moral is because it works better. Like, first of all, I have these values that are sort of independent of morality, and then I observe that, in some cases - probably in many cases - acting morally actually helps me achieve those values more effectively. It's sort of like a

Divia Eden: capabilities boost for you.

Ben WR: Yeah, exactly. And it's very unclear to me that it would be a capabilities boost for a superintelligent AI. In fact, I would guess that it wouldn't be, especially for a singleton. And I would pretty strongly guess that most human morals would also not be capabilities boosts in more multipolar worlds.

Ben Goldhaber: And so in general, do you feel more optimistic [01:01:00] about the multipolar worlds and or do you think we're trending in that direction?

Ben WR: I, I feel pretty pessimistic about all possibilities really. Even if you did have a multipolar world, and they did inherit some aspects of what we would consider morality, or sort of, you know, evolved that, or realized it independently or something, my guess is that a lot of it is pretty dependent on what kind of species you're interacting with.

And my guess is that they would potentially enact a future that was compatible with morality relative to them and not relative to us. Like, I mean, you know, for example, maybe their future [01:02:00] has no humans in it because they don't especially care about humans.

And that might not itself be part of the thing that they come to adopt, in a similar way to, like, I don't know, ants. I mean, sorry, I think it's a little bit of a stretch to be like, oh yeah, ants are super moral.

But I do think that, like, ants are very social. Like, I don't know, they're, you know, eusocial insects, and basically their whole lives are devoted to, you know, the functioning of their local group. And humans, I think, basically don't have a term for ants in their, like... even though many of the things that we think of as moral probably somewhat line up with how ants behave.

We don't think of the ants as being valuable, [01:03:00] if that makes sense. And I sort of imagine something like that also holding in the case...

Ben Goldhaber: Like if there were... there's likely a speciesist attitude in morality?

Ben WR: Something like that. Something like that. Like, part of it is gonna be because you have a collection of agents with similar capabilities, and those things are going to be moral patients in whatever social system arises.

I would expect and probably that's not all of it. Like, I think in fact it is probably like not moral to torture your ants, but it's like much more moral to kill ants than it is to like kill humans.

Divia Eden: Right. So you're sort of saying that your views on morality say that it'll be, it'll you, you, you follow your moral system because you think it works better for achieving your values.

You expect an AI would do the same, and you expect that there's some sort of action that's like, treat this as a moral patient. And whether to [01:04:00] do that is kind of contingent on whether doing that in general for that type of thing would help you achieve your values more. And you don't think that would happen with AI?

Ben WR: Yeah, that's basically right. Although it's not always going to be your values. I think there is definitely gonna be a component here that's more like slave morality or something, where part of morality I think is going to be about purely preserving the systems that you're a part of, and not necessarily your own values.

And so I think that's maybe another component of it. But yeah, I think otherwise that was basically right.

Divia Eden: Yeah. Do you feel good about the slave morality component personally? Like, if you could sort of separate it out and know, okay, well, this is something that I'm doing that'll preserve the system I'm a part of, but doesn't actually enact my values.

Like Yeah. How do you relate to that?

Ben WR: Yeah, I mean, I think if I, if I knew how to reliably separate that out, I, I think I would want to and mostly discard the things that are sort of more like only about preserving the system [01:05:00] even in ways that I like, don't endorse. I think it is gonna be hard to do that without like doing all the rest of step two of like figuring out yeah, like, just generally what's up.

And so I'm kind of hesitant at the moment to discard things like that, unless I have a really strong story for why I think something is more like slave morality.

Divia Eden: This is like another calculation problem, basically. Yeah. To try to separate this out. And so you wouldn't go there?

Ben WR: It's, it's like very structurally similar to the thing about like, why not just like add up all the consequences. And like under what circumstances might you like change your mind?

Divia Eden: mean, the other, the other place where the personal philosophy, I would think intersex with the AI thing is there's this pat, it's been a lot on Twitter recently, which I, I almost hesitate to even repeat it, but people will be like, well, if you really took AI alignment seriously, obviously you would then do all of these horrible, you know, morally [01:06:00] horrific things.

Yeah. So do you wanna respond to that from your stance?

Ben WR: Yeah, I mean, I think the easy answer from my point of view is like, no, those things are bad. And it's usually not a good idea to do things that are bad, in proportion to how bad they are. And they seem like often extremely bad.

Yeah, sure.

But, and then I would say, okay, usually, but this is some out of distribution event, supposedly. That's the claim.

Yeah. I mean, it does push me toward slightly more extreme policies. Like, I think, I mean, so Eliezer's article in Time is, I think, an expansion of the normal sort of foreign policy world where, like, you know, yeah.

Like, you would also plan to enforce this ban even against non-signatory countries. And I don't know, I mean, that does seem to me kind of extreme, and in most cases [01:07:00] would not be called for, would not in fact be worth it. And I think, you know, if you were to consider that aspect in isolation, I think it's not good.

It, it is like, you know, it's the kind of thing that like is kind of immoral to do. And nonetheless, I am pretty compelled that it would be good to do in this case. But I think that's like, there's a pretty big difference between like, insofar as it is possible coming to a global consensus that that's the right thing to do.

And then carrying it out versus like, a lot of the things that I'm seeing on Twitter are sort of much more like vigilantism and like you know, sort of like recklessly going around, like murdering people. And I'm like I don't think, I don't think that's good. Kinda like, like you want some kind of like, Rough consensus, open source, almost like that ethos of how you govern something where it's like, maybe if we get 80% of the way [01:08:00] there, it's like, probably all right.

Yeah, I mean I, yeah, I'm not sure exactly how to think about the consensus aspect, but I do think that like a, a procedure which like involves going around and trying to like, get people on board with this plan as much as possible and also making very clear like what exactly your policy is so that people can like, steer clear of it seems like way, way better as like, as a procedure.

Like I think it, it pretty clearly for me pushes it from like, that's insane, obviously stupid bad, like policy to like Yeah, I mean I, I, it seems very reasonable given the stakes.

Divia Eden: So you would say something like, okay, it is, you know, potentially an extreme out-of-distribution situation, which does push in the direction of doing things that have downsides you normally wouldn't consider, but certainly not infinitely so. [01:09:00] And you would still sort of take the normal costs and benefits into account a lot.

Ben WR: Yeah, absolutely. I, I also think that like there's a way in which like seeking consensus is a way of avoiding a lot of the like, sort of failure modes of sort of individually like calculating wrong. So like if, if I thought, oh, you know, what we should really do is bomb all the data centers, I'm just gonna like go out and do that.

Like, that seems way more clearly immoral than, like, maybe what we should do is bomb all the data centers, and I'm gonna try to get the governments on board, and also try to get people to shut down the data centers first. Just that process of gathering... there's a way that that, I think, can defuse a lot of the failure modes of trying to do the calculation yourself, [01:10:00] from a wisdom-of-the-crowds point of view.

Yeah. But also I think like in, in terms of like I think it, it just is gonna be a component of, of sort of like how moral a thing is, like how much agreement there is about it. Like, I don't know, I think it's, it's totally fine to make a, to like ask someone to give you their bicycle for the afternoon.

It's like not fine to like steal their bicycle and bring it back later.

Divia Eden: Okay. So I'm trying to sum up in my head how you're relating to the... so the moral, what should I call it? Is it moral physicalism? I think that was a term somebody used, yeah.

Ben WR: I mean, the, the thing I'm, yeah, it's sort of a part of like a broader like, sort of metaphysical shift that I made recently, which I've been like referring to as physicalism.

Okay. I, I don't know, like I think it is sort of quite tightly connected to philosophical physicalism.

Divia Eden: Can you say... now I'm interested in that. Can you first say a few words [01:11:00] about that shift?

Ben WR: Yeah, yeah. So I think, yes, so, so for a while. Yeah. Hmm. How do I say that? So yeah, I guess this observation that like everything that's happening with people is sort of like the result of physics on like a lower level is like, or sorry. Well, probably, I guess I should say probably the result of physics on like a lower level is like a reframing of a lot of different questions that I had.

So meta ethics is the most obvious and sort of the most relevant for a lot of things. But also the question of what is real, which is also related to the moral realism thing, somehow feels much more clarified to me now than it used to. Where there's sort of [01:12:00] physics as the fundamental stuff, and then there's real stuff, which is more or less things that have a particular relationship to the physical stuff. Which are not necessarily, like, it's not correct to say that that glass of water isn't real just because the glass of water isn't a well-defined physical set of things.

It's sort of fuzzier, but it's still clear to me that it makes much more sense to think about the glass of water as real. And I think that vague insight also applies really directly to the question of free will. So it gives kind of a clean story for how, if real things are sort of structures, or fuzzy structures, built out of the physical stuff, I think it's totally reasonable to think in terms of free will being [01:13:00] real despite being built on a deterministic physical substrate, where it is describing a particular phenomenon in humans, that we make choices.

Like there is something happening that like results in us making choices. And if your like philosophical determinism denies that, like you're wrong. And like, I think it's just sort of like, it, it makes about as much sense to talk about will and free will in making choices as it does to talk about a glass of water.

Like I think it's...

Divia Eden: And can you, like, cash out, what are the ways in which it makes sense to talk about either? I can imagine, but what would you say to that? Yeah.

Ben WR: So, okay. I guess one of the ways that I'm framing this in my head has to do with isomorphisms between different parts of physics.

So it happens to be the case that the physical world we live in has a lot of structure. And it's sort of this very strict pattern where [01:14:00] there are lots of different parts of physics which are isomorphic to each other. And if you've read the Eliezer Sequences post The Simple Truth, I think it sort of gets into... okay, but it was a long time ago.

Yeah. It's the one where there's like a, a shepherd and he's like, you know, trying to figure out like how to count his sheep and like he's got these rocks and he's like, he, he sort of like notices that like if he, if he puts one rock in every time a sheep goes through the gate and then like takes one rock out every time a sheep passes the other way through the gate, like he will have correctly counted like whether all of his sheep have left the paddock or not.

Mm-hmm. Like. This is sort of like, it's an isomorphism that he's like sort of constructing between the rocks and the sheep. Like the, the rocks are like telling him true facts about the sheep because of the way that like, the number of rocks and the number of sheep, like are connected. If that makes sense.

Yeah. And this is like [01:15:00] everywhere, I think, in physics and, I guess, just in the world, these sorts of isomorphisms, and in particular useful isomorphisms, where, you know, a map is sort of isomorphic to the territory, meaning I have this nice little piece of paper and I can use the nice little piece of paper to navigate in the real world out there because there's an isomorphism between them.
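A minimal sketch of the rock-and-sheep correspondence being described here, assuming the bucket-of-pebbles version of the story; the class and names below are just for illustration, not from the original post:

```python
# The pebble count stays isomorphic to the number of sheep currently outside,
# so "is the bucket empty?" answers "are all the sheep back?" without looking at the field.

class PebbleBucket:
    def __init__(self):
        self.pebbles = 0

    def sheep_leaves(self):
        self.pebbles += 1   # one pebble in per sheep going out through the gate

    def sheep_returns(self):
        self.pebbles -= 1   # one pebble out per sheep coming back

    def all_sheep_home(self) -> bool:
        return self.pebbles == 0


bucket = PebbleBucket()
for _ in range(3):
    bucket.sheep_leaves()
bucket.sheep_returns()
print(bucket.pebbles, bucket.all_sheep_home())  # 2 sheep still out -> prints "2 False"
```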

Divia Eden: Yeah. This reminds me, this is the thing I got from Anna Salamon, who was talking about it recently: the unreasonable effectiveness of math in the natural sciences.

Ben WR: Yeah. Yeah, totally. I think this is a big part for me of what I think is going on there: math is about these really, really good isomorphisms and, like, towers of isomorphisms, right?

Where, like, you know, you can count sheep with rocks. You can, I don't know, count fish with tally marks or something. And there's sort of an isomorphism between those two isomorphisms, where what you're doing in both cases is sort of this counting thing. And I think you [01:16:00] can build, as far as I can tell, all of math basically that way. Which I think is another one of the major reframings for me from this sort of physicalist viewpoint. I think I was natively thinking often in terms of sort of a Platonic view.

Divia Eden: Oh, are you a mathematical realist now too?

Ben WR: No. So, well, sort of, I mean, maybe, I don't know. I'm not quite sure what you mean by mathematical realism.

Ben Goldhaber: Can you all explain that? What do you mean by mathematical realists?

Divia Eden: I think what it means, and I, yeah, I, I'm sort of starting to like, imagine what your viewpoint on this might be. I, I think what it means is there's some question people like to ask of something like, does the math exist?

Even if there isn't a physical instantiation of it?

Ben WR: Yeah. Yeah. And I think historically I would've been like, yeah, duh. Like, physics is made out of math. Interesting. Yeah. But recently I've been like, oh, but... I don't know. I feel like that belief is in a similar category as the supernatural view of [01:17:00] ethics that I was talking about before.

Like, I have no story for how that could get into my head and be true. Or like the, the truth of it could have gotten into my head.

Yeah, though I guess like, you know, it's funny because when, normally when I think of like, is there at least, you know, is there any physical instantiation of this? I tend to think of things that are not.

Inside people's minds, but then hearing you say it, I'm like, okay. But then the fuzzy abstractions, they would count too, right? Yeah. As the physical instantiation.

Ben Goldhaber: Totally. Yeah, exactly. And I think my sense is basically that math is roughly the most general sorts of isomorphisms, the things that sort of apply everywhere in the world, and would apply under many ways that we can be uncertain about the world.

Like I have no idea how big the world is. Like, I, I think it makes a lot of sense to sort of think about infinitely large sets, even though I, I, I doubt the universe is infinite in terms of like space. [01:18:00] And like,

Ben WR: yeah, I think I'm trying to put this together in my head, so I'm like, okay, how, how is this infinite thing compatible with mathematical realism?

Well, maybe it's that if, you know, if the universe produces beings which observe it and try to make isomorphisms about it, they're gonna predictably come up with these types of structures and then instantiate them in their own heads,

basically, right? Yeah. That's basically Right. Okay. And like, and in a way that like, is very scalable.

It, like... infinity in this view is just basically a way of representing this very clear regularity, which clearly would apply in the physical world no matter the size. Mm-hmm. Or something close to that. Yeah, I don't know. And this felt super clarifying, this general shift to seeing all of math as being more or less secondary to physics.

Right, right. Yeah. [01:19:00] Yeah, I do. I think that was a particularly good point, cuz I'm starting to see some way in which it's like, all right, math is this unreasonably effective set of isomorphisms for patterns we see in the world. Maybe that's the primary one. And then you're pointing out, like, there are these other patterns that seem very effective, like these kind of universal moral patterns.

Moral patterns.

Yeah. No, it definitely ha like, I feel like I was getting there too.

Divia Eden: Yeah, so hearing you talk about the physicalism in general and sort of, I don't know, humoring me with the mathematical realism bit, I, I think I have a better sense of what might have been the shift that happened for the way you're thinking about morality. Where there, yeah. The, and I think I, maybe this is the same as what I said before, but like that there's something that's grounded that was not previously grounded.

And that there was some sort of bit that was there before that you're now like, no, no, I'm rejecting that bit, cuz it seems sort of supernatural. And that now it seems [01:20:00] like morality, not exactly, but it's more like just another thing that you can have thoughts about. Yeah. And you could try to do sense-making the way you would about other things, and it's empirical and it's complicated. And that there's some sort of impulse for elegance on the object level that you mistrust a lot more now, in part because, I don't know, because you do see some pretty elegant principles on the meta level for how to make sense of this, and they don't add up to something particularly simple on the object level.

Ben WR: Yeah, I think that's totally right.

Divia Eden: And this is, yeah, it's a shift for you, and it has some implications for how you relate to the systems around you in terms of the EA stuff. And, I don't know, maybe rationality: we didn't really talk about the rationality memeplex and how it fits with this. If you have any brief thoughts on that, I'm interested too.

Ben WR: I think there's a way that you could go kind of crazy. Like thinking about all of [01:21:00] the different mathematical structures that you like, might be embedded in sort of like reference VAs yeah.

That kind of thing. For sure. And, I don't know, I think it's just better to, like, plan for, or, like, sort of be thinking in the mainline, which is just: there is this world, it's kind of mundane, you know, almost by definition of the word mundane. But it's just there; we have a lot of evidence for this.

Divia Eden: Is the word mundane... is that the etymology of that? Like, the world?

Ben WR: Oh, I didn't realize that.

Ben Goldhaber: Yeah. Oh, yeah, until you said that. No, I didn't either. Mundus. Yeah.

Ben WR: yeah, yeah. And like, yeah, I mean, it's, it, it, it encompasses all of the like other crazy amazing stuff and like, it's amazing, but it is still sort of You know, it, it's just there and you don't have to, and in fact, it's a little bit weird and I'm not sure why you would appeal to like Sort of [01:22:00] crazy mathematical stuff.

Like, I guess, things sort of more like Tegmark IV, or the belief that all mathematical objects exist. Or at least, yeah, at least without having some other kind of motivation within physics. Like, I think Eliezer and Benya at MIRI do have a lot of this sort of thing as background, and then notice that in fact a lot of the evidence about physics points toward some kinds of infinity. Like, many worlds sort of points towards some kind of infinity.

And you do need to sort of reason about that. And so I think there are sort of reasons to invent something like Solomonoff induction, or, you know, some of the crazier stuff.

Ben Goldhaber: But you'd be very skeptical of things like thought experiments around the simulation hypothesis, perhaps.

Ben WR: Yeah, I think that's basically right. I mean, [01:23:00] it's not entirely like being skeptical of the arguments. It's more like, I don't know, it seems like I'm at least embedded in a particular physical world. Yeah. Maybe I'm also elsewhere, but, like, yeah.

Divia Eden: You can tell me if this analogy makes any sense or has kind of gone off the rails, but when I try to inhabit your frame, I'm like, okay, so morality, let's say I'm thinking of it as like a map of some useful fuzzy abstractions, some useful isomorphisms.

It's meant to apply in my local environment to help me achieve my goals. And now I've take, I've like looked at my map and I've been like, okay, I'm gonna like, these lines seem pretty straight, so I'm gonna extend them out like 50 million times as long as they ever went. And this sort of looks like this tessellating pattern.

So, like, same thing there, like a bunch of analogous things. That seems to have some moral implications, and you're like, no, it doesn't really. Does that seem right?

Ben WR: Yeah, I think that's basically right. And, yeah, so I guess it seems to me like... sometimes, I don't know. Like, this one I'm a lot less clear on.

I think there's a much stronger chance here that I just haven't thought as hard about it as the rationalists, who seem a little bit crazy on this axis to me or something. But I do have some sense that the reasons that I previously thought that that was plausible no longer feel compelling after having the shift, right?

Divia Eden: Because it seems like, maybe the reasons you used to find it plausible were something about something that now you consider kind of supernatural, plus maybe some "I want the object level to be elegant."

Ben WR: Right. It was sort of like, I mean, yeah, like I think it is sort of true. Like I could be embedded in a lot of things and like it's just kind of like, well you know like some of those I like could know things about and other ones I can't.

And it seems [01:25:00] clear that there's this particular one that I'm definitely in. And maybe there are other things too, but it seems pretty weird to me, like, I don't know. So, like, modal realism, the idea that, you know, all possible worlds are real in the same way, exactly the same way, without adding any sense of measure, or which ones are more likely, or whatever.

Seems to me like it sort of obviously has to contend with this problem, which is that our world looks really consistent: barely anything crazy happens, if anything. And I think if modal realism were true, almost everywhere would just look totally crazy.

Ben Goldhaber: What's an example of the kind of crazy you'd expect?

Yeah.

Ben WR: I mean, like, I think I would expect to have memories of like elves popping into [01:26:00] my, like, like, like through my hand or something. Like, I dunno, just like anything you can imagine as being like, sort of possible in some sense. Like would be real.

Ben Goldhaber: Okay. Yeah. I think I'm a little lost on this, like conception of like, there's this view that like maybe, like any possible world is in fact likely to happen.

And so we might see these types of, like, random events, like an elf appearing or, I don't know, something, and we tend, as far as we can tell, not to.

Ben WR: Yeah. It's sort of like, I mean, the particular claim that I think is the most hard to square with my experience is that they all have exactly the same amount of realness, or they're all real in exactly the same way.

I think if you're instead like, oh, well, you know, those ones are not very real, or they're less real, they're a little bit real, but, I don't know, Solomonoff induction says they're way less probable or something, then that's less crazy. But [01:27:00] yeah, I mean, that is not in fact the claim that philosophers are often making about modal realism.

And I think it should be at least suspicious, if modal realism is true, that our universe is so orderly, you know?

Divia Eden: Yeah, exactly. Okay. So yeah, I think, I don't know, I think we've maybe collectively understood at least a decent amount about where you're coming from here. Thank you so much for sharing your perspectives.

I, I really like hearing it.

And I think, I think Ben, you, you have some questions about things that are, that are slightly different though of course, all things are related. So does, does it feel right to go there?

Ben Goldhaber: I think so. I think, yeah, that's exactly right. I mean, and these might be a little bit more scattershot; I might just throw a few out there.

But I really wanted to hear more about what you're currently working on in the tools-for-thought space and how you're kind of approaching this.

Ben WR: Yeah, totally. So, about a year ago, a little bit more, I realized that I [01:28:00] don't know anything about the future. I mean, I think I probably actually know more than the average person, but I don't know enough to know what I should do.

And I think almost no one does. And this seems like a real problem, because there are a lot of scary things that sort of look like they're coming, and if we don't have models for how things will work, I don't know how we can survive as a species. And I don't know, I mean, that's all sort of a grandiose way of putting it.

But actually the thing that I most viscerally feel is like, I don't know what the... and you wanna fix that? And I want to, yeah. I wanna understand what's happening and what's going to happen. And I thought about how I would go about figuring out what would happen, and I was like, okay, I think I want [01:29:00] Roam, except, like, less bad.

Divia Eden: This is... Roam Research is a knowledge-graph note-taking tool. We can put a link in the show notes.

Ben WR: Yeah, exactly. Yeah. It's basically, I mean, it's a great idea, and I really like Roam. It's basically sort of a tree of bullet points, and you can, I don't know, link between different pages and things, and it's really very convenient to use.

Divia Eden: Quick note: our podcast notes for this episode were in fact created in Roam. This is a Roam-supporting podcast.

Ben WR: Yeah, so the problem with it is that there are a bunch of other features that are sort of built on top of this nested-list sort of system, including some that are sort of calculation-y and database-y and stuff.

But they're [01:30:00], in my opinion, implemented in kind of a haphazard way. And that's one reason that I didn't think I could do it in Roam. Another thing is, I think any thinking on things in this sort of general genre is gonna be very uncertain. And as far as I know... I don't know, I haven't been paying attention, because, you know, confession time, I switched to Logseq, which is sort of a Roam clone.

But they might have added something like this; my guess is not. Like, if you're familiar with Guesstimate, which is this web app made by Ozzie Gooen, which is sort of like a spreadsheet, but with samples from probability distributions instead of individual values.

I found it [01:31:00] extremely useful, but not very scalable. Like, if you're trying to make a large model in Guesstimate, it becomes pretty unwieldy pretty quickly. It's also not easy to collaborate with people on models in Guesstimate. And I think both of those felt like pretty key issues there.

So I didn't feel like I could use Guesstimate either to do the kind of thinking that I wanted to do. And so ultimately I was like, okay, well, I'm a programmer, I know how to do things like this, I am just going to go ahead and make this thing. And so the current idea, which I'm temporarily calling Calx, C-A-L-X, but I don't know, that may not stick, is basically to have a similar sort of tree-like structure to Roam.

Where, and like, sort of similar like linking and stuff in between documents and things like that. Where every node in the tree gets a, a, like [01:32:00] basically a spreadsheet cell. That is also like like sampled from a distribution. So. Ultimately, like you can basically have a top level question, which is like, I don't know, like, what should I do about AI or something.

And then you can sort of break that down into sub-questions and combine your answers in the top-level question. And you can sort of have... basically, it's not quite a probabilistic programming language, because it doesn't let you, at least the first version won't let you, do inference.

Like, it won't let you learn a distribution from data. But it'll let you have this uncertain estimate which propagates your known uncertainties through, and can sort of show you: here is roughly what my other beliefs kind of imply about what I should believe about this thing.

And yeah, I don't know, I'm really excited about it. [01:33:00] I think, yeah, it's probably not the easiest thing to visualize if you're just listening to me describe it.

Divia Eden: I mean, I guess what I heard you say is it's sort of like a cross between Roam and Guesstimate, and by crossing them it makes it more scalable for sort of tracking how the beliefs affect the other beliefs.

Ben WR: Yeah, that's basically exactly right. It's also gonna be entirely collaborative, basically. So it'll be a lot easier than in Guesstimate, for example, to use someone else's calculation that they've done in your own calculation, and do things like that. Or, like, sort of maybe, you know, treat things more like Google Docs as well.

Divia Eden: Is this something people can play with, or not yet?

Ben WR: Okay. Not yet. I have like a couple of terrible screenshots of like the, I've, I've built the sort of core logic engine of it, but it's, right now the user interface is just like a command line interface on my laptop.

So it's not quite to the point where [01:34:00] people can use it. But this week I'm planning to go ahead and make the server for it, so that I can then go ahead and build the actual web front end. And then maybe two or three weeks from now, I hope to have a prototype.

Ben Goldhaber: That's exciting. Do you imagine doing this on, like, many of the questions you're facing day-to-day?

Ben WR: Yeah, absolutely. I mean, I have used Guesstimate for a lot of questions day-to-day, like which job offer should I take, or, you know, should I apply to this thing, or whatever. And I found it really useful for that kind of thing.

Divia Eden: Oh yeah, sorry. If there's something, you know, that you're willing to walk us through a little bit about: here's how you were thinking about it before you put it in Guesstimate, and then afterwards, what was it like?

Ben WR: Yeah, I mean, so a lot of the time, so I mean, before I knew about Guesstimate, I would like have spreadsheets that would sort of be like, okay, well here's my like estimate for X and like here's like, you know, my estimate for Y and here's how they should like combine to like, give me an estimate for Z.

And [01:35:00] that's like, I think pretty useful and like I've gotten a lot of mileage out of that in my life. But It's also potentially really misleading if you're like, using these point estimates because like, if you like, the point estimate is probably gonna be like your mean guess or something, or your average guess.

And like that is not, like, it's not a good representation of like what your actual uncertainty is. Like it, like what your error bars are roughly. And you can, if you're like, you know, sufficiently anal about it, you can like go through and like, you know, make a hundred samples in like a hundred rows and like do your calculation across the spreadsheet.

But it's just kind of, I don't know, a pain, and I never actually did it in practice. And I don't know if anyone else did.

Divia Eden: I bet a lot of our listeners know, but can you actually explain what a point estimate is, in case people don't?

Ben WR: Yeah, sure. So, so basically if I have some uncertainty about some values, so say like, I don't know maybe I've got [01:36:00] a friend and I wanna know his height.

Like, I don't know, a point estimate would be like he is probably about six feet tall. It's like a single number, which sort of like represents my best guess for like some uncertain value. Whereas like the distribution itself that I have, Like that better represents that uncertainty. Might be like something, kind of like a normal distribution that's sort of centered around six feet.

And it has some standard deviation, which tells me sort of how much variance there is in my estimate. Like, do I think he's similarly likely to be six feet or six feet five inches, versus six feet or six feet one inch?

And so those are very different uncertainties that I might have over his height. And the way Guesstimate works is basically, instead of having a single value [01:37:00] like the six-feet guess, it takes many samples from the distribution that you tell it.

So if I said, you know, his height is a normal distribution centered at six feet with, you know, a standard deviation of one inch, it'll take, you know, thousands of samples and then propagate those samples through the same computation. And then at the end I get to see this sort of histogram.

And other facts about the distribution, as a result of sort of seeing all these samples. All of the calculations just get applied to the distributions.

Ben Goldhaber: So then you're not losing information, as you would be if you were cutting off the tails and just seeing the mean or something like that.
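A minimal sketch of the sampling idea described above, assuming the height example; the downstream calculation and all numbers are made up for illustration and say nothing about Guesstimate's internals:

```python
import numpy as np

n = 10_000
height_inches = np.random.normal(72, 1, n)   # "a normal distribution centered at six feet, SD one inch"
doorway_clearance = 78 - height_inches       # some downstream calculation, applied to every sample

print(f"mean clearance: {doorway_clearance.mean():.1f} in")
print(f"90% interval: ({np.percentile(doorway_clearance, 5):.1f}, "
      f"{np.percentile(doorway_clearance, 95):.1f}) in")
# a single point estimate would have collapsed all of this to one number and hidden the spread
```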

Divia Eden: Yeah, exactly. And this is, like, yeah. Again, if you could describe how this has made you see potential decisions differently?

Ben WR: Yeah. I mean, I think it sort of removes a lot of the illusion of certainty, which I think is really [01:38:00] valuable. So, yeah, one particular example is thinking about which job should I take. Like, I think in 2016 or something, I was considering whether I should work at Cruise or... I think the other competing offer was Flexport. And at the time, I had a couple of different things that I cared about, and I wasn't really sure how to combine them. Like, Cruise was gonna be better for a lot of things.

At that time I was already a little bit into AI safety and wanted to get more experience with AI stuff. And so Cruise was gonna be better on that axis, and I wasn't really sure how much, or like, how helpful that would be. And I also was really uncertain about the compensation between the two.

So, I don't know, Flexport had given me an offer with a lot more equity, and Cruise had a lot more salary. [01:39:00] And so I tried my best to figure out what I expected, in terms of uncertainty, the valuation of the companies to be at the time when my shares would vest, and so on.

And like, I mean, I don't know, I think it's, it's just really useful to be able to see what that like results in when you like, convert it into like your distribution over like total compensation over time. And then you can sort of like, Take that distribution, and then you can also have like some other crazy distribution over like how useful this is for AI safety, which is like a pretty weird question to ask.

But like, I don't know, like at least in that case, you're not sort of deceiving yourself that like, you know, like, I, I think a lot of the time before using Guesstimate, I would sort of have You know, like pro con lists and then I would like add up all the pros and add up all the cons and like, oh yeah, well there's like six in this column and three in that column.

And I think it's all sort of fake, and it's not always obvious that it's fake. And I think it's more obvious if you're describing it like: oh yeah, well, this is my guess, but also I have no clue.
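For the kind of job-offer comparison described a moment ago, the same trick looks roughly like this; every number, distribution, and variable name here is hypothetical, not the actual model that was used:

```python
import numpy as np

n = 10_000
years = 4

# Offer A: higher salary, negligible equity
a_salary = np.random.normal(180_000, 10_000, n)
a_total = a_salary * years

# Offer B: lower salary, equity whose eventual value is very uncertain (heavy right tail)
b_salary = np.random.normal(140_000, 10_000, n)
b_equity_value = np.random.lognormal(mean=12.0, sigma=1.2, size=n)
b_total = b_salary * years + b_equity_value

print(f"P(offer B pays more over {years} years) ~ {np.mean(b_total > a_total):.2f}")
print(f"median: A ${np.median(a_total):,.0f} vs B ${np.median(b_total):,.0f}")
```

The point is not the specific answer but that the comparison keeps its error bars instead of collapsing each column into a single pro/con tally.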

Ben Goldhaber: One thing I've found that can be really hard with large Guesstimate models, or for that matter large spreadsheet models, is something like comprehensibility, and the ability to come back to them.

Like, some sense of, yeah, you get all these different cells, and then it's like, maybe it's good. Is some hope with your tree-structure layout that it is more reusable?

Ben WR: Yeah. Yeah, absolutely. I think, so, one thing for me is, as a programmer, there's the structure that most programming languages have, which is almost sort of like Roam in that everything is sort of tree shaped.

You have, you know, expressions which are themselves made up of other expressions. And this is a really great way to [01:41:00] organize complicated calculations. And spreadsheets don't really do this; they're like, nope, you've got this 2D grid. Sorry, I'm gonna take a sip of water. Yeah, hydro homies. So yeah, I basically do have this intuition that this tree structure, also from Roam, is really, really good as a way to organize sort of complicated questions and think about them: just kind of start from a broad thing at the top and,

you know, dive down and have sort of these collapsible sub-questions, or little bits of extra information, which you can incorporate if you want, and so on. I also think it's important, it feels important to me, that every cell also has a text [01:42:00] bullet next to it.

So the default workflow I'm imagining is sort of: you build a tree which is describing your uncertainty in English, and pretty loosely, because it's like, well, I don't even know what units this is supposed to be in or whatever. And then you can sort of iteratively, starting from the bottom, or sort of the most concrete, simple questions to answer...

Yeah. You can sort of work outward and figure out how to combine the sub-elements at each level. So yeah, I mean, I think comprehensibility is a huge part of it. Yeah. Great.
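A rough sketch of the tree-of-estimates idea being described; Calx isn't public, so the class and field names below are hypothetical, but the bottom-up propagation of samples through an outline is the point:

```python
import numpy as np

N = 10_000  # Monte Carlo samples per node

class Node:
    def __init__(self, text, dist=None, combine=None, children=None):
        self.text = text          # the English bullet describing the question
        self.dist = dist          # leaf node: a sampler taking a sample count
        self.combine = combine    # internal node: function of the child sample arrays
        self.children = children or []

    def samples(self):
        if self.dist is not None:
            return self.dist(N)
        return self.combine(*[child.samples() for child in self.children])

# "How many hours will this tool save me per year?" broken into sub-questions.
uses_per_week = Node("uses per week", dist=lambda n: np.random.lognormal(1.0, 0.5, n))
minutes_saved = Node("minutes saved per use", dist=lambda n: np.random.normal(20, 8, n))
hours_per_year = Node(
    "hours saved per year",
    combine=lambda uses, mins: uses * 52 * np.clip(mins, 0, None) / 60,
    children=[uses_per_week, minutes_saved],
)

s = hours_per_year.samples()
print(f"median ~{np.median(s):.0f} h/yr, "
      f"90% interval ~({np.percentile(s, 5):.0f}, {np.percentile(s, 95):.0f})")
```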

Ben Goldhaber: Yeah, I'm, I'm gonna throw something out here, which feel free to not pick [01:43:00] up at all. It might not be interesting or, but like, I, I've also been really interested in forecasting for a long time, and one like thing that kind of comes up in certain parts of the forecasting space a lot is like, well, what decisions have really been changed by some of these forecasts?

Like, you hear this with prediction markets a fair bit. Like, all right, is this info actually going to change somebody's decision? And I feel like one thing I'm kind of catching from your description of Calx is some hope that many of these decisions can get changed if it starts at, like, the person level.

Like, if you make them better at thinking about uncertainty, as opposed to creating some kind of external system. Is that right? And also, do you agree or disagree that...

Ben WR: Almost, almost. I definitely agree that a lot of the forecasting stuff that happens, I'm not totally sure how useful it ends up being.

Especially sort of the, the more like sort of public forecasting [01:44:00] stuff. Although, I mean, I don't know. And I do think a lot of the sort of more public prediction markets have like produced a lot of value. I. Or especially like the ones that are sort of high volume, I think partly because those are like, you know, the interesting ones.

But yeah, I do think there can sort of be this disconnect between the people who are making the decisions and how they're making the decisions, and the people who are doing the forecasts and how they're choosing which things to forecast. And it does seem really valuable to me to sort of connect, like, the agency with the forecasting.

Say a little bit more about the agency with the forecasting.

Like, I mean, I think me choosing a career feels like it's pretty tightly connecting the agency with... yeah, I have this particular decision that I have to make, and I'm gonna use this to help me make that decision. And there's no wasted motion, I guess.

Ben Goldhaber: I strongly agree with that. I've felt this in my own life a fair bit: the way in [01:45:00] which all this stuff seems really fake until I actually just need to figure out if I'm gonna move to city A or city B, and then it's like, oh, all right, this is a little more helpful. I care.

Ben WR: Yeah. And it's a little bit weird. Like, I think there's actually some evidence that sort of feels counter to this, which I'm confused by, which is: in science, it seems like a decent amount of the time there's just some guy who gets really into categorizing all the rocks, and he just goes around and categorizes all the rocks, and doesn't have any particular reason for doing it, really.

It's just sort of his special interest. Yeah. And then later that's extremely useful, and it's like, I don't really understand how that works. And I think it's pushing me a little bit toward thinking, ah, maybe I'm just wrong, and actually people doing, like, following their random interests and producing artifacts is good.

Even though...

Ben Goldhaber: Yeah, though those don't seem totally opposed to me. [01:46:00] Like, I don't know, I'm pretty stoked by the random person going out and doing that. And also the random person super into forecasting in some way. I don't know, that doesn't seem like the same type of opposition. They almost seem a little bit more tightly nearby. Like, there's both some kind of purity of: they just want this thing. Yeah, yeah, yeah. I mean, that's fair. Yeah.

Divia Eden: Yeah, I agree with that. I think there's, there's something where like the, the guy categorizing the rocks, like we maybe don't know why he cares, but he does care.

Whereas I don't know if, I'm trying to like bet on who's gonna win the midterms. I think that's sort of, there's something about that that is missing that is there with both the rock guy and you trying to figure out what to do next.

Ben WR: Yeah, yeah. Yeah. That seems maybe right, although I'm not sure what it is.

Divia Eden: Well, I mean, with me betting on the midterms, there's just more of a disconnect.

Like, maybe I wanna make money on prediction markets. Maybe I [01:47:00] wanna show my friends how cool I am.

Ben WR: Yeah. It's about the rightness.

Divia Eden: And maybe I even care who wins the midterms, but it's not under my control. So there's not some rapid feedback, like with the dinosaur guy... sorry, I keep saying that because there's that meme. I keep thinking of it because there's this meme about the guy that just really wants to do something with dinosaurs, and I think of him as like what you're saying with the rocks.

Ben WR: We'll definitely link this in the show notes.

Divia Eden: Yeah. Like, if that guy is thinking about the rock things, it is tied to his agency: he's gonna go search for rocks in a different area based on his theory about rocks or something.

Like, there's some sort of feedback loop that I, I think is at least much harder to get if I'm betting on the midterms.

Ben WR: Yeah. That's interesting. Yeah. I'm still a little bit confused though, like why does he care or something. And like, I don't know, like what is causing him to care in a way that like is somehow predictive of what other people will find useful.

Divia Eden: This gets back to the unreasonable [01:48:00] effectiveness of mathematics in the natural sciences.

Ben WR: Oh yeah, no, that's actually a really good point. Okay. So, if I understand where you're going with that, it's something like: people inexplicably have these special interests or whatever, basically because that helped in the past, or, like, the process that would produce these special interests... like, they are useful?

Divia Eden: I do think so. I think, yeah, I think there could be something like that. It reminds me of some stuff that Seth Roberts said about how he thinks that there's maybe some sort of cultural or genetic evolution towards people liking artisanal things, because it helps with technological progress.

I don't know if that's, I dunno if that's true, but it definitely reminds me of that. I also think, I think these days I tend to not reach for evolutionary explanations as much. Not that I don't think they have value, but I also think sometimes people have either some weird combination of neuroses or aesthetics or whatever, where then [01:49:00] it's like, I don't know, it's like there's some itch in their minds.

And then I think, because of the sort of unreasonable structuredness of the world, whatever itch is in somebody's mind is gonna be isomorphic to something interesting in the territory a lot of the time. Huh? Yeah. Interesting. Not necessarily for an evolutionary reason, but because abstractions kind of line up, it seems like.

Ben WR: Yeah. Let me see if I can think about that a little bit deeper. One sec. Like, I can sort of imagine that like people's minds just happen, like are structured because it's useful in such a way that like the things that are even there in your mind to get obsessed with, like are sort of worth getting obsessed with.

Is that like closer to the thing? Yeah.

Divia Eden: I mean, I don't know that they're like, certainly I don't think they're always worth it,

Ben WR: But, like, yeah. Yeah. But, like, more likely to be, or, like, yeah.

Ben Goldhaber: Or am I hearing some way in which you, like, trust the instinct or the impulse to get obsessed about a thing? [01:50:00]

Divia Eden: I trust it for a number of reasons.

Partly because, and this is sort of a separate issue that is maybe more what you're saying about the useful thing, I think that insofar as there is this tight feedback loop between what people are doing and what they care about, they can produce outsize impact. And so even if they're like, well, maybe I'm taking a...

Like, I'd have a multiplier of, you know, 10x if I'm working on the thing that seems useful; maybe my actual ability to do it goes down by even more if I'm not interested in it. And so in some ways it seems pro-social, because if people wanted to maximize their personal impact, they might try to steer themselves more.

But if people have a more hits-based model of people doing what they're interested in, maybe, like, "everybody do what you're interested in" is a more promising societal strategy than, okay, everybody try to do the most important thing. I think... I don't think it's super clear, and probably some mix, probably not quite that simple either.

Like, I think it's a sort of naive framing, but...

Ben WR: Yeah. Yeah, I mean, I like it. I think [01:51:00] it fits really well with my personal experience of just... whenever I do anything worthwhile. Like, a lot of the time when I've tried to do the thing that would be best or something, in an abstract sense, rather than trying to do the thing that I feel excited about, it just basically goes nowhere.

Divia Eden: And I do think there's some real calculation problem there about what is best.

Ben WR: Yeah. Yeah. Like, I think I am making a mistake. Yeah. And yeah, that seems very apt or something, at least for me.

Ben Goldhaber: Makes me even more excited for eventually playing with Calx, just because, I don't know.

Sounds like you've gotten obsessed about it and there's a bit of a, like a tour thing going on.

Ben WR: Yeah. I'm also, yeah, separately, I've been really obsessed with this thing called CRDTs for years now: conflict-free replicated data types. Yeah. What's that? It's just [01:52:00] a way of building applications such that they can be easily turned into distributed systems.

So like, so if you wanted to build Google Docs but have it be end-to-end encrypted, so like the Google server couldn't see your doc you can't do it the way that Google Docs is built. You have to do it in such a way that like your browser, like the client machines can do all the conflict resolution themselves.

This is super... I don't know. It's way in the weeds.

Divia Eden: But it'll be a segue to one of our next topics. I think you should keep going.

Ben WR: Oh, cool. Yeah. So basically, if you have the state of your application fit into this particular, very simple mathematical structure called a semilattice,

you can easily, like basically trivially, solve all of these very hard distributed-systems problems, like, you know, accidentally getting the same [01:53:00] message twice, or things that normally would cause hiccups in your application when multiple people are editing it or, you know, interacting with your application.

You instead can just deal with them gracefully. And anyway, I basically finally have this project where I actually can use this really amazing, I don't know, cool trick. And I'm really excited, because I think I am gonna be able to make this tool end-to-end encrypted in a way that most similar tools can't be, by sort of using this, I don't know, uncommon way of making the thing.
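A minimal sketch of the semilattice property being described, using a grow-only counter as the standard textbook example; this isn't the specific CRDT any particular tool uses:

```python
# The merge is commutative, associative, and idempotent, so replicas that receive
# updates duplicated or out of order still converge to the same state.

class GCounter:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}          # replica_id -> count contributed by that replica

    def increment(self):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + 1

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        # element-wise max is the semilattice join
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)


a, b = GCounter("a"), GCounter("b")
a.increment(); a.increment(); b.increment()
a.merge(b); a.merge(b)            # duplicate delivery is harmless
b.merge(a)
print(a.value(), b.value())       # both converge to 3
```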

Divia Eden: No, sorry, I think that's... I actually wanna eventually try to tie that back to some other things, but I'm also gonna try to tie it forward, because we had on our list to ask you about SecureDNA, and I might be [01:54:00] overfitting to say that it seems a little bit related to what you just said, but I'm gonna try anyway.

Ben WR: Yeah, I mean, I think aesthetically it feels very related to me. Like, I mean, so security a is like, is building basically this like system where like basically for screening orders to like gene synthesis labs Sorry, let, maybe I'll start, start from further back. Okay. So there are these companies which can synthesize DNA n a for you.

You send them a sequence of, you know, bases, nucleotides, like A, G, C, T. And then they send you synthesized DNA that matches that sequence. And this is really great. I mean, it makes lots of biological research a lot easier. But it's also a little bit scary, because, you know, many [01:55:00] viruses are basically just made out of nucleotides.

And so you could basically just make a pathogen, and potentially an unusually dangerous pathogen, by sending an order like this. And so there's this question of, how can these synthesis companies avoid making the next pandemic while preserving the privacy of their customers?

And, you know, without also leaking the list of pathogens, or leaking the information that would allow someone to figure out what the next...

Ben Goldhaber: And by that you mean, is, like, having some kind of public list of the things that you're not allowed to order?

Ben WR: Yeah. I mean, there is in fact a public list of things that you're not allowed to order. We'll link it in the show notes. Yeah, it's [01:56:00] called the Australia Group. And in fact, that is what the first iteration of SecureDNA is targeted at: preventing people from ordering things that are known hazards.

But a second iteration is gonna be targeted at what they're calling emerging hazards. So basically things whose sequences are not publicly known, but which are important to screen against anyway; maybe they were things that were just learned about. And yeah, so basically there's a lot of this aesthetic similarity, just in that they're both sort of trying to elegantly solve these problems with privacy and security and distributed systems.
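A deliberately naive sketch of the screening concept only; SecureDNA's actual system relies on cryptographic protocols designed by actual cryptographers so that neither the order nor the hazard list is exposed, and the fragments below are placeholders, not real sequences:

```python
# Flag an order if any window of it matches a known hazard fragment.
HAZARD_FRAGMENTS = {"ATGCGTACGTTAGC", "GGCTTAACCGGTAA"}  # stand-ins, not real hazards
WINDOW = 14

def order_is_flagged(order_sequence: str) -> bool:
    order_sequence = order_sequence.upper()
    for i in range(len(order_sequence) - WINDOW + 1):
        if order_sequence[i:i + WINDOW] in HAZARD_FRAGMENTS:
            return True
    return False

print(order_is_flagged("ttttATGCGTACGTTAGCtttt"))  # True
print(order_is_flagged("tttttttttttttttttttttt"))  # False
```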

And the SecureDNA stuff, I should be very clear, I did not design any of the cool crypto stuff that is [01:57:00] making it possible. It was all, like, you know, actual cryptographers. But so this is something you're working on? That's really, really cool. It's something that I... so I was working on SecureDNA most recently, as a full-time job.

And so, yes, basically I was, I guess, the first programmer that they had hired to work on it, apart from, sort of, grad students. Sounds very important. Yeah, I mean, it's a really cool project. I think if I thought that biorisk was a bigger deal than AI risk, I probably would've just kept working there.

But I eventually was like, oh man, I feel like I should get back to the real stuff or something. No offense. I mean, I think it is real stuff for sure. But the stuff that feels realest to me or something. [01:58:00]

Ben Goldhaber: Something I was just kind of mulling on about your explanation of what SecureDNA was, because I was also just kind of curious and honestly didn't know that much about it, is the idea of systems that enforce certain norms or rules in a multi-party kind of game, but that also aren't just a strict, centralized, top-down kind of model. And I feel like I'm picking up a little bit on that aesthetic thing that you're pointing at, like what the similarity is between that and the CRDT, in some way in which it's not a, yeah, I don't [01:59:00] know.

It's not a single, government-enforced thing, in the same way in which you need to have a distributed kind of system to handle it. Are there any systems of this type that you're optimistic about, or that you're kind of thinking about, within the thing that feels real to you, of AI?

Ben WR: Yeah, I mean, I guess in AI I don't yet see this kind of thing, or this kind of aesthetic, represented very much, and I'm not totally sure how it might come to be more represented.

I mean, I guess, I don't know, sometimes I hear people talk about how every person gets their own sort of personalized AI assistant or something. And maybe you could end up with something that would have this kind of aesthetic that way. But I don't know, it sort of rings hollow to me to say that or something.

It doesn't really feel quite like what will happen. But [02:00:00] yeah, I don't know. I mean, I'm a little embarrassed to say this, but I think it's also kind of part of the aesthetic of the crypto world.

Ben Goldhaber: Like I was gonna say, and I didn't wanna utter the cursed words of blockchain. Yeah.

And AI. But I certainly think there's some kind of aesthetic thing there, even if that sounds terrible as I say it.

Ben WR: Yeah. Yeah, and I avoided learning about blockchains for a long time because I had a sense of that whole world being super toxic or something. But actually it's really cool, and aesthetically I love it.

And I guess, I don't know, I'm not sure what to do with that, but yeah, it's pretty similar. Yeah, never let the haters tell you what you should learn about or something, I guess. Yeah. That is a good...

Divia Eden: Gotta follow your weird, idiosyncratic interest in rocks, right? [02:01:00]

Ben WR: Yeah. I hope so.

Ben Goldhaber: Well, is that a good note for us to close on? I feel like we've covered a lot of the questions that I had. Divia, are there any others that you want?

Ben WR: I think,

Divia Eden: I think I wanna try a potential additional wrap-up type move, which we can, you know, if it doesn't work, then, you know, cut it or something like that. But yeah, I guess, so when I say that, I'm obviously sort of joking about the Rock guy, and I'm also sort of really not. Yeah, I think, if I try to sort of digest everything you've been saying since the beginning, with the physicalism and the moral realism and the tools for thought and the SecureDNA stuff, I think here's my attempt to sort of, I don't know, build a picture of how your mind is working in relation to these problems or something.

It, like, I think the thing I see unifying it is something like, [02:02:00]

And this is gonna sound similar to things I've said, but there's some impulse, maybe, that a lot of people have, and I definitely relate to it, to just sort of add an extra meaning layer somewhere and then kind of reify it, in a way that goes with a top-down type of thinking that has calculation problems.

And this is an issue with how people think about morality: they've sort of added something, and then they're like, cool, now that I added this morality juice, I can just calculate it, when it doesn't necessarily work that way. And then similarly with the tools for thought, there's some way that I'm gonna be like, okay, cool.

I have a number now, and that number is my estimate, and we're making it special. And now we can pretend like we're calculating something when we're not. And I dunno that I can fit this as cleanly into the SecureDNA thing, but [02:03:00] there's maybe some sort, if I were to map that impulse, it'd be like, okay, here are the dangerous things.

We're putting 'em on a list, and now we're gonna, but, where that's maybe also, there's some unifying aesthetic around, no, no, let's figure out where the elegance should actually go so that we can actually figure things out, and it's not necessarily where our first instinct is to put it.

And by sort of de-reifying that, we can get something that's more robust, potentially.

Ben WR: Yeah, I think that definitely, yeah, that really resonates for me. I guess one thing to say, to sort of riff on that a bit, is that I think sometimes it actually can make sense to sort of live in a fantasy world temporarily.

Like, I think there's a way that when mathematicians are thinking in terms of the Platonic realm, they're eliminating one layer, like one sort of spatial layer in their [02:04:00] brain, of things they have to track. And I think, to some extent, that's a super valuable thing to do, but I also think that it's really easy to accidentally forget that that's what you're doing, or not notice that that's what you're doing, and to sort of end up believing that that collapsed version is the truth.

Yeah, that makes sense. And it's kind of, yeah. Yeah.

Divia Eden: And it's basically, you don't wanna negate that activity, but you do wanna contextualize it.

Ben WR: Yeah, totally. Yeah. Yeah, I think that's totally right.

Ben Goldhaber: Lovely. Well, I think on that note, I just wanna thank Ben. Thanks again for joining us and, I don't know, giving us a chance to kind of understand

your worldview and, I think, I don't know, the world a little bit better. Yeah, totally.

Divia Eden: Thanks so much for coming on and for your time.

Ben WR: Yeah, this was great. Yeah, I really appreciated it. Thanks a lot. I really enjoyed chatting. Yeah, I don't know. We should hang out. [02:05:00]
