Mutual Understanding
Shea Levy on why he disagrees with Less Wrong rationality, Part 1


with a big section on philosophy and Internal Family Systems

In this podcast, Shea and I tried to hunt down a philosophical disagreement we seem to have by diving into his critique of rationality. We went off on what may or may not have been a big tangent about Internal Family Systems therapy, which I’m a big fan of, and which I think Shea thinks should have more caveats?

Unfortunately, our conversation got cut short because partway through, Shea got a call and had to deal with some stuff. We hope to record a Part 2 soon!

Transcript:

Divia (00:01)

Hey, I'm here today with Shea, who is my Twitter mutual, recently got a job doing software engineering management at Andro, and is an objectivist. I've listened to a long podcast with Shea about objectivism on Daniel Filan's podcast, The Filan Cabinet, which I recommend to anyone who's interested in that. We'll see how much we get into that today; I'm sure it'll come up. Our primary topic for today is: Shea sent me a long list of things, and one of them was "contra rationalists." So, if you check my Twitter bio, it says I am a rationalist. So I figure there's probably some interesting disagreement here that we can dive into.

Shea Levy (00:38)

Yeah, sure. Thanks for having me. Yeah, I guess with respect to the rationalists, I've been rationalist-adjacent, for lack of a better term, for over a decade now. I originally became aware of it through Ozy. Ozy was on, I don't even remember what the thing was, there was some feminist blog kerfuffle, and a blog I followed, Provocracy, recommended looking at Ozy's take on it. And then through Ozy, I found the rest of the rationalist sphere. I was mainly on rationalist Tumblr for quite some time. I've been socially connected and intellectually engaged with the community since then in various ways. Some of my closest friends are rationalists.

Divia (01:35)

Like they say.

Shea Levy (01:37)

And throughout all that, there's been a kind of general fascination and love for some of the things, and then deep frustration, and then, for some parts of the community, going from frustration into thinking there's something seriously wrong, defective there. I don't know what's interesting to you, whether you want me to just give my spiel or you want to talk, I don't know.

Divia (01:57)

Mm-hmm.

So something that comes to mind, and I don't know how much this will relate. The first conversation I remember us having on Twitter, which may not be the first one we actually had, was about AI boxing. Does this ring a bell? Yeah, because I don't remember exactly, but I think I was like, okay, who wants to talk about AI boxing, or something like that? And you were like, no, this doesn't make any sense, because I would just simply not let the AI out of the box. And we ended up talking about it, though then it seemed like-

Shea Levy (02:17)

Yeah. Yeah.

Hey.

Divia (02:35)

This is probably what you're saying all along, but what I eventually understood you to be saying is, unless I think the AI should come out of the box.

Shea Levy (02:41)

Well, so, okay, let's start with that one. So I feel like there are a few pieces to it, but let's drill into me personally, if I'm the one in charge of the AI box. We'll see that there are people you should not put in charge of that. But the fundamental thing that I don't buy in the AI boxing argument is that there is some, like,

Divia (02:52)

Yeah, you were clear about that. You were not saying an arbitrary person in charge is a good idea. Yes.

Right.

Shea Levy (03:10)

general key you can turn in anybody's mind to make them believe anything illegitimately. I'm not saying I'm someone who completely, like, never falls for any scam. I'm not saying that. Somebody could trick me about things if I were in a normal situation where I'm treating them as a good-faith human being that I'm interacting with; certainly people can pull something over on me. My claim is not that.

But my claim is, if I know that this is a potentially hostile intelligence, and let's set aside whether an AI could do that, that's a separate concern I have, but yeah, if I know there's a potentially hostile intelligence, I don't understand how it works, and I'm well aware that it doesn't work like a human being even if I don't know the details of how it does work, then there is not some magic sequence of words that it can tell me that

will convince me of something false relevant to letting it out. I just don't buy that, and none of the arguments I've seen suggest that it's true. It all really seems to rest on a sort of determinism, on the impossibility of objectivity, like the idea that the AI could present a picture that is an illusion.

It could present something that looks like X but is actually Y, and could confuse me visually. And I think there's this idea, implicit and sometimes explicit in the argument, that reason is like that, that there's some reason-level illusion possible. That's just not how reason works. So if I knew what the situation was: I know it's an AI in the box, I know that it's extremely dangerous.

I think it's possible I would let it out, but only under the circumstance that it actually convinced me properly, because it was right, not because it came up with a clever argument. I would set up controls beforehand and then reevaluate those controls if I thought they were being breached. There would be a number of things I could do, depending on the specifics, to actually confirm whether the argument it's making is legitimate or not.

Divia (05:09)

Yeah.

Shea Levy (05:28)

And so, yeah, that's basically my view. I don't think it's true of everybody in the population, certainly, but I don't think it's something distinctive to me, like I've got some special skill here. I think there's a certain degree of objectivity and self-awareness and understanding of how reasoning and persuasion work

that you need to reach, and once you're at that point, there isn't this magic button that a known adversary in a known adversarial context can press.

Divia (06:04)

Yeah, ultimately I think you convinced me that that was basically right.

Cause I had, well, I guess I should first say what AI boxing is, as people may not know. Many years ago, Eliezer Yudkowsky, and of course the whole thing in some ways is moot now, but this was about an AI safety argument where he was like, look, AIs are dangerous. And at the time people used to be like, but we could put the AI in a box and not connect it to the internet, and then it would not be dangerous. And he was like, okay, but people would let it out. And people were like, I don't think so.

And he was like, I'll tell you what, I'll do an experiment: I'll role-play the AI, and then the person will let me out. And he did do that a number of times. I don't know if it was 100% of the times he tried it, it might've been, but it was certainly a bunch of them. And he didn't say exactly how he did it, because he was like, well, if I say, then people will be like, well, that exact argument wouldn't have worked on me, and that wasn't the point.

Shea Levy (07:01)

Yeah.

Divia (07:03)

And then I tried it too. I tried it with a friend of mine, and in fact, he let me out as well. And so I was sort of like, okay. Yeah, and as it is, people of course have made AIs and immediately put them on the internet, so no AIs are in boxes as far as I know. Maybe some hypothetical future AI could be in a box, but that's not really what people are up to anyway. But it is interesting to me, because the first time I heard about Eliezer doing it, I was like, well, how could that possibly be right? I can't imagine anyone doing that. And then I thought about it, came up with some ways maybe I could do it, and then I did it. But I guess it never really occurred to me, I think until talking to you, which seems like a pretty big oversight, that maybe some of those people should have let the AI out of the box. That there aren't actually zero cases where people should let it out. And of course it could be that it was more like he tricked them, but yeah, there was this other possibility.

Shea Levy (07:53)

Yeah, I'm curious, are you okay sharing publicly what you did, or maybe you could share it privately afterward?

Divia (08:02)

I just don't think it was that interesting. I tried to role-play the AI and say the most persuasive things I could say, which in fact were me being like, yeah, you could run these safety checks on me, or whatever. But then I was like, but of course I could fake those. And I think your point is, well, maybe not, though. The AI couldn't necessarily fake that stuff.

Shea Levy (08:19)

Let's take an example. I don't think this is how you would solve the AI safety issue, if there is an AI safety issue, but it's something like:

Let's say it's an AI in the old mode, before we had weights and things, when we thought we were going to write it in code, right? And what the AI gave you was a formal specification, relatively concise, that a formal theorem prover could understand, and then a proof. You'd obviously have to inspect the proof to make sure it's not taking advantage of a bug in the theorem prover; maybe you would check the same proof in multiple theorem provers, and all of that.

That kind of thing, there are certain aspects of that. Now, I don't know if that would be enough, it wouldn't be enough by itself, but there are certain aspects of that where I would be very convinced: yes, this code that is validated has certain really important properties that are relevant to safety. So yeah, the level of thing, it would have to be a fairly robust

piece. And part of it would be, if I'm already somebody concerned about AI safety, part of what the AI in the box has to convince me of is, let's say I buy Eliezer's basic framing around friendliness and value alignment, then it needs to solve the value alignment problem and then convince me it's solved it, or prove that it's not a problem, right? If it's superintelligent and this is a real problem. But like,

Divia (09:47)

Mm-hmm.

Shea Levy (09:57)

it wouldn't suffice to just say, you know, here's a list of common-sense things you can check, and if you check these things, it's probably fine, right? That's not the bar, and I know that's not the bar. And so, you know, I wouldn't let it out. Yeah.

Divia (10:12)

Yeah. Okay. So with that as some context, can you walk me through your anti-rationalist spiel?

Shea Levy (10:20)

Yeah, I mean, I think it's a love-hate thing, so it's not purely anti. I guess the first thing I'll say is what drew me to the rationalists originally, and still does: outside of objectivism, and it's not all objectivists, it's the only group of people where a significant part of the subculture

exists and is driven by people who care about what's right and what's true, in a way that ties in with a modern understanding of economics and scientific development and things like that, that takes industrial scientific civilization seriously and values it, even if there might be some issues with it. And again, is dedicated to understanding what's true and following that where it leads. So there are obviously non-rationalists who have that,

Divia (10:54)

Mm-hmm.

Mm-hmm.

Shea Levy (11:18)

but there's no other community or movement that has that as a distinctive characteristic. Other than objectivism, that I've found. Historically, yes, but this gets into some of the philosophical-cultural things I put on our list. I think there is a general element in our culture that

Divia (11:18)

Sure. Community.

other than objectivism, you're saying.

Shea Levy (11:45)

pits idealism against reason and science. And I think the rationalists are one of the few groups of people who, on an implicit level, have combated that. And I'm emphasizing "on an implicit level" because this is where the bad side comes in: I think the explicit ideas, insofar as you guys have explicit ideas that you're cohering around, anyway.

Divia (11:57)

Mm-hmm.

Yeah, we sort of do, we sort of don't.

Shea Levy (12:14)

The explicit ideas you are cohering around that are common, I think, ultimately undermine that interest. No, no, even setting aside safety, because I think some of the safety stuff is a consequence of some of the things I'm talking about. But people who aren't concerned about safety have these issues too. And basically, I was thinking about this before you got on the call, and there's a sort of,

Divia (12:22)

Do you mean the AI safety stuff? Okay. Okay.

Okay, but you don't mean that.

Shea Levy (12:44)

I have a view on what the fundamental issue is, but at the high level, the surface level of it, there's an acceptance of a view of the world that

takes too seriously contemporary views on ethics and the nature of science and reasoning, and then doesn't grapple with the issues that those ideas have. And so insofar as the thing that attracts me is people who take ideas seriously and want to actually get to the bottom of the issue, the thing that bothers me is that those of you who are more

explicitly philosophical don't seem to be bothered by the issues with your philosophical framework, and those of you who aren't just get led astray by the issues. The biggest surface issue, which I don't think is the fundamental one, is the ethics. Part of the problem is that there is no consistent ethical framework. There's a sort of modal framework, but that's part of the problem. Part of the problem is

Divia (14:05)

That's true, yeah. I would agree with that.

Shea Levy (14:12)

not seeing these philosophical ideas as needing to be an explicit, integrated system. You guys are big tents at the level of the kind of culture you're trying to build, and a broad tent doesn't work. You need to actually have... It doesn't work for intellectual and value alignment. Insofar as your goal is to be an intellectual movement, or even a social movement trying to build a

Divia (14:26)

Wait, it doesn't work for what?

Shea Levy (14:41)

different way of living within the broader culture, which, you know, the rationalist project is not a coherent project, it's not a specific thing. So we're sort of working backwards from, if you were to say, what is distinctively rationalist or not? And anybody would have legitimate quibbles with anything I'm including here. But if I talk about, like, the consequentialism, and I'm going to use the AI terminology, but this is not really an AI thing, this predates the AI issues, this is the

Divia (14:50)

Sure.

Shea Levy (15:12)

orthogonality thesis or... sorry? Okay, let's take the weak, historical, human version: reason is the slave of the passions. Basically, or to put it maybe in more rationalist jargon,

Divia (15:13)

Yeah, I don't buy it, for what it's worth. I don't buy the orthogonality thesis. Or, sorry, there's the weak version and the strong version, and I could get into the weeds, but I basically don't.

Shea Levy (15:34)

You have your preferences, and maybe you have preferences about your ultimate values, perhaps. It's not "ultimate values," it's "terminal values," the way you put it, right? And those are, maybe for some strains of rationalists, entirely evolved in. For some, it's a mixture of evolution plus whatever culture you grew up in and experience, or whatever. But those are not things you reason about. They're things that you have, and then you use reason to achieve them. And maybe you have

subsidiary values that you reason yourself into causally, that support those ultimate ends, but those are given in some way. So that, I think, is wrong. But then there's an additional layer on top of it, which is, and obviously this is one most of us have, and if you're in the stronger biological camp you say it's genetic, some flavor of altruism.

It's built into us to care about other people. You know, Eliezer has this short story, and I wish I knew what it was called, about some alien species that isn't at all altruistic. It's not Three Worlds Collide, no, it's a different one. Basically, a big part of the story is that one of the aliens evolves an altruistic impulse.

Divia (16:48)

Do you mean Three Worlds? Is this part of Three Worlds Collide? No. Okay.

Shea Levy (17:02)

But the alien species is kept technologically down because they're all egoistic in a very, I think, short-sighted, dumb way. I think Eliezer's not taking egoism seriously in this story. But the idea is that each, and it's a very genetic, selfish-gene-type thing, each individual cares about itself and then cares about its relatives

based on how closely genetically related they are. And so there's planet-wide warfare constantly, and very limited technological development. Then the humans come along, and they're extremely technologically advanced. And anyway, one part of the story is there's one genetic mutant in the tribe who's developed altruism, and humanity has adopted this alien into humanity. And part of that story, though, is that

the alien doesn't believe the humans that they're being altruistic. They think it's some kind of lie. They're like, how could you possibly evolve this? And the human is like, yeah, well, our best guess is a sort of lucky evolutionary path, where we evolved altruism in a way that could have been exploited, but wasn't. And then it got widespread enough that it was able to sustain itself.

But anyway, a key part of the story is that altruism is clearly, in this view, at least locally opposed to this flourishing life, and maybe you only get it if you get lucky. Anyway, the reason I get into that is, there's clearly a view amongst many rationalists that not only is altruism my personal terminal value, but it is in fact a human-wide

Divia (18:35)

Yeah.

Shea Levy (18:51)

thing that many humans, if not all, have baked in. And that being baked in is not questionable, and is kind of a foundation of ethical reasoning. So first of all, I find that depressingly parochial, because not all human beings have it. Especially historically. If you look at

Divia (19:11)

because lots of people don't behave that way or even profess it. Yeah.

Shea Levy (19:17)

the first people who explicitly talk about ethics in the Western tradition, it's the ancient Greeks. There's a wide variety of ethical stances that they put forth, but basically none of them are explicitly altruistic in this way. You have elements of things that look like it, but for nearly all of them, the goal was your own perfection and happiness. That is the goal of most of these ethical theories. And some of them think, okay, happiness is not really possible, but at least we can reduce pain. But again, it's reducing your own pain, it's not reducing pain for people at large. I think the altruism is a very Christian thing that comes in. So that's the most surface-level thing, and I can get into where I think it comes from. But I think that is...

Divia (20:06)

Mm-hmm.

Wait, okay, so.

Shea Levy (20:12)

that's the area where it ties into EA. But even before you get to the weirdness of EA versus cultural altruism, it's really the altruism part. Yeah, anyway, sorry, I've been talking.

Divia (20:24)

Yeah, okay, no, this is interesting. Let me see if I can back up and maybe try to summarize. So unfortunately, I do have a small sinking feeling that we might not have as substantive a disagreement as I hoped. But yeah, okay, I think we will too. Because I'm not really willing to defend the orthogonality thesis, that values are sort of arbitrary things that came from evolution, or evolution plus culture. I will not defend that thing.

Shea Levy (20:38)

I think we'll get there. I think we'll get there if we keep cooking.

Divia (20:54)

Okay, I will say a little bit, and this doesn't feel like a central point, but about the Greeks: someone could be like, okay, but clearly they did care about some people somewhat, right? People who weren't just themselves, and not necessarily just as means to their own ends.

Shea Levy (21:08)

Yeah, but nobody doesn't, right? It's the stupidest straw man of egoism to say that an egoist literally doesn't have friends, literally doesn't care about anybody, right?

Divia (21:11)

Yeah.

Okay, yeah, no, and I don't want to spend time on that straw man, but I guess I don't fully know what you mean by altruism here.

Shea Levy (21:33)

So I mean, I think the basic idea here is: who is the proper beneficiary? One of the big questions that I think any moral system should answer, given that moral principles give guidance to human action, is: who ought to benefit?

Who ought to benefit from your actions? And the altruist stance is, not as a means to some other end, not as a part of some bigger thing, but fundamentally, the standard of value is the good of others. Now, there's the utilitarian version, where it's not the good of others, it's the good of everybody including you, but you're still one piece of it. And I don't think that distinction is very important.

Divia (22:20)

But it's.

But is that what people are saying? Is this really part of rationality?

Shea Levy (22:38)

So again, we can quibble about what is and is not exactly part of rationality, and I'm happy to have that discussion. They wouldn't say that explicitly, so, yeah, okay, that's a fair point. And I'll flag that a lot of the things I'm going to claim, many of them aren't explicitly endorsed in those terms, but...

Divia (22:45)

Well...

Yeah, that's what I'm asking about. I'm like, how can you see that? Like what are some maybe less obvious ways that you can see this as more part of the rationalist worldview?

Shea Levy (23:07)

So I think you see this in the embrace of utilitarianism, of: what we're trying to do is do the most good, right? What does that mean? It doesn't mean build the best life for yourself. It means create a universe where happiness across the whole population, or the whole, you know,

intelligent light cone, right, is maximized in some sense. And I'm like, that's not everybody.

Divia (23:37)

And yeah, to what extent is this a rationalist position? And again, I'm not trying to nitpick, I'm trying to ask the substantive version of that.

Shea Levy (23:48)

Yeah, I mean, I think so.

I think insofar as there is a vision of the good, it is not always explicitly this, but it is almost always either something like this or, and I think these ultimately bottom out in the same thing, a view that externalizes the good from yourself in some way. It's sort of like,

it's not exactly this, but it's something along the lines of: you could imagine, what would I have to do to maximize the mass of the Earth? That's the sort of problem you could present, and goodness is something like that. It's more complex, but it's some characteristic of the state of the world that you are trying to maximize, right?

You see this in the kind of reasoning in some of the thought experiments: if you could press a button and some magical thing would happen, which world is better? As if that's a coherent question. Yes, yeah, but this is sort of going lower in the stack than ethics. But yeah, so that is, I think, a big part of

Divia (25:16)

Yeah, I do tend to be a conscientious objector to these types of questions. Yeah, yeah, okay. I like your answer.

Shea Levy (25:26)

that kind of thinking about ethics. Okay, maybe here is the thing I would more strongly defend as central to rationalism, but have a harder time pointing to a lot of convincing examples of. There's a mistaken view about what science is, and about what kinds of identifications, especially quantitative identifications, are properly considered scientific. And then

Divia (25:39)

Yeah, I got it.

Shea Levy (25:54)

an application of that view to ethics, that ethics is a scientific thing. And I agree that ethics is a scientific thing, but if you have this wrong view about what science is doing, then... I'm going to try to see if I can... Okay, so there's this view that objectivity, especially in a scientific context, is somehow impersonal. It's somehow that you have to take yourself out of the picture, and you have to think about

Divia (26:03)

Yeah, so can you expand on the wrong view of what science is doing?

Shea Levy (26:24)

what would... You know, sometimes that impersonality surfaces as, what would convince any rational being? And they're very clear to say not just a human being, but any sort of rational thing, what would convince them. Or these veil-of-ignorance-type approaches: what would convince anybody who doesn't have any particular interests at stake? So they're starting out saying that that's what scientific reasoning has to do.

And then it says, okay, if we're going to make a scientific ethics, if we think that's possible, then whatever we're aiming for has to be measurable and identifiable in that way. Yeah, it has to be legible, but on that particular view of what legibility is, right?

Divia (27:05)

to be legible.

Right, legible to some kind of view from nowhere type of thing.

Shea Levy (27:14)

Yes. Right. And then what a lot of people do is say, but we can't find that. And so what is going to ground this process, what we can make legible, is: given an ultimate preference, X would be good. But the ultimate preference isn't itself legible, or it's made legible in non-ethical ways. It's legible because it's what you evolved to have, or it's legible because it's, you know, what

culture has acculturated us to have. And that's where I think these two paths merge. Either you think you can succeed at this project fully for ethics, and there's some intrinsic goodness you can measure with, you know, a goodness thermometer pointed at the world, or you say, no, you can't do that.

Divia (28:01)

Moral realism, right? I mean, people maybe disagree about what that means. Yeah.

Shea Levy (28:03)

I'm a moral realist, though, so that's my point. It's a view of what realism would have to mean. I think that...

Divia (28:10)

Yeah, I consider myself a moral realist also, I think. Though again, when I try to talk to people on both sides of this, I seem to get really different, really varied answers about what moral realism is supposed to mean.

Shea Levy (28:13)

Yeah, okay.

Yeah. Yeah, so I guess I say I'm a moral realist because I think that kind of gets at the issue, but I think this is an area where the...

conceptualization that we have, not of an object-level position, but of how we categorize the positions, slants the discussion to rule out the right answer.

Divia (28:43)

Mm-hmm.

Yeah, I will say, in terms of who's saying this explicitly, I think, and apologies if I'm misrepresenting anyone, but I think this is Spencer Greenberg's position: that rationality is for how you achieve your values, and values can be whatever. I think he calls it valuism. I think you're saying you can see it in Eliezer's writing, and maybe in Less Wrong in general, but I think this is something Spencer would be like, yes, this is my position. Yeah, okay.

Shea Levy (29:14)

Yeah, yeah, and where you see elements of this is that people talk about preferences and preference stability, and the notion of preference and the way it fits into rationalist discourse, I think, implicitly bakes in a lot of this. Obviously, I think there's a sort of innocent path there, because there are preferences, like I prefer

long video chat discussions about philosophical ideas, and my wife very much doesn't. She would not be having fun right now, and I am. And that's, that.

Divia (29:52)

Therefore, I should like offer it to you and not to your wife and stuff like that.

Shea Levy (29:55)

Right. Yeah, right. So I should seek it out. So there's a real phenomenon that people are seeing here. And it's not only that; also, the fact that I have this preference and she doesn't, doesn't mandate that one of us is right and one of us is wrong, right? Yeah. But then there's a sort of, okay, well, that's what all of valuing must be like. And really, it's...

Divia (30:11)

Totally. I think everyone would agree. Everyone reasonable would agree about that.

Shea Levy (30:22)

I think the better way to think of it is that there is optionality within a general positive view of what a, no, sorry, "positive view" is getting too epistemic. There's optionality within the realm of what is a good human life. But the nature of what a good human life is, is not

downstream of those choices. At the right level of abstraction, it is a fact of reality what a good human life is. Now, there are things to say about that, I cringed at that formulation, but as a first pass, I'll put it there.

Divia (31:03)

Okay, I'm still a little worried that we don't disagree about as much as I thought. So what is something, maybe you have a guess, like what's something that you think we probably disagree about?

Shea Levy (31:13)

I, so I don't know. My guess is there's something in epistemology and maybe, maybe some metaphysical stuff that we disagree on. And that's, that's only because of things around, like

Like, okay, the discomfort I have with internal family systems. We haven't gone into that, but like, I suspect your willingness to...

Divia (31:38)

yeah, we can talk about that. I do like it.

Shea Levy (31:47)

Embrace it despite whatever concerns you might have. I suspect that's downstream of some... Yeah. Yeah, yeah, I mean, maybe, maybe let me start at the bottom then, because I think that's, this is maybe like... Yeah, okay. So my view, and again, I don't think this is every single rationalist, there are definitely rationalists who disavow this explicitly, but still

Divia (31:53)

Okay, yeah, let's try it. This is a good trailhead and we'll see what happens at the end of it.

It's like with libertarians, they say, you know, there might be two rationalists who agree with each other, but I'm not one of them, right? That's what they say.

Shea Levy (32:17)

Right, exactly. Actually, libertarianism is definitely a place where I think we would disagree. So let's, let's, so I think the basic thing is, again, we have this pre-explicit philosophical value that rationalists tend to share of caring about what's true and what's right, and seeing that through, or seeing

Divia (32:22)

That might also be true. Okay, let's get...

Shea Levy (32:45)

some elements of the post-enlightenment scientific and industrial and cultural establishment as reflecting that. Not everything, but like something about the way science has developed, something about the way like our culture has developed, some elements of that in some ways reflect the success of like taking seriously what is true and or enable us to do better at taking seriously what is true.

Divia (33:14)

Okay, I mean, that seems like a pretty narrow statement. As asked, I'm like, yeah, okay, I think I'm on board with that so far.

Shea Levy (33:15)

So.

Yeah. So, like there are people who either reject modernity altogether, there's like the religious mindset which is like the real religious mindset, not the like, you know, postmodernist, like I'm religious, but like all other religions are fine, but like the religious mindset who thinks their religion is true, takes their religion very deeply seriously, and like thinks like God literally exists and literally did these things. That religious mindset you could sort of say cares about what's true.

Divia (33:27)

True.

Shea Levy (33:48)

but not in a way that embraces any element of modernity. And then there's the other side, which embraces some element of modernity, but then thinks that implies that you are, that, like, taking truth, like, the whole post-rat ethos of, like, taking truth seriously is for squares. Like, that's, that's the-

Divia (33:52)

Right.

That's why I put rationalist in my Twitter bio. It's not the only reason. Like, I really do come out of that community, but I'm like, no, I don't think that's true.

Shea Levy (34:12)

Yeah, right. Yeah. So like, I think that's why I put those two together, because I do both, to kind of, just, anyway, so you have that. But I think, at the bottom,

The rationalists are by and large materialists. They take the success of the scientific revolution and the ability, sort of the reductionist success, of basing chemistry on physics and biochemistry on chemistry. And then basically they're saying, okay, let's take that fully seriously. We could talk about different variants of materialism, but basically what we care about is physics, things moving around, everything is built on that.

In some cases there's a sort of degree of taking that seriously where they'll say there's no mental anything. And some will say, yeah, there's mental stuff, but they don't grapple with how their worldview actually doesn't allow for mental things. So that's a very basic philosophical issue. How does that apply in different cases is an interesting question. Yeah.

Divia (35:17)

There's a term, by the way, that I sometimes use on Twitter called the Materialist Epistemology Industrial Complex.

Shea Levy (35:22)

Yes, okay, so you, okay, I think, okay, so like, if you reject that, if you reject that, and it sounds like you do if you have that kind of dismissive terminology for it, like...

Divia (35:28)

Yeah. I think so. Yeah.

Shea Levy (35:35)

In what sense are you rational? Like, I think part of the, like, centering,

Divia (35:41)

Okay, I do want to answer that, but you were gonna say some stuff about internal family systems. I really do like that.

Shea Levy (35:46)

So okay, let me, let me jump to the internal, sorry, I forgot about that, right. The Internal Family Systems thing is like, my guess, and I don't know enough about your view, but my guess is there's, there's an element of like

Divia (35:54)

Yeah, well, we'll figure it out.

Shea Levy (36:05)

Treating yourself and the development in psychology of treating yourself as a third party, a third person object to evaluate, that's like, internal family systems, it's not literally true that there are multiple yous in your head, right? Unless you're a...

Divia (36:24)

Okay, but internal family systems for the record, calls most, it calls a bunch of things parts and then also talks about self.

Shea Levy (36:33)

Okay, so like maybe, right. But okay, maybe part of it is that the popularization that I've heard of it sounds too much. It's a sort of thing that, okay, I guess maybe, let me just say what I've heard and what makes sense, what seems like is going on to me, is that it can sometimes be helpful to sort of, like, for the sake of a, like,

Divia (36:33)

So it's not trying to say they're all the same thing.

Is it fat suit?

Shea Levy (36:59)

thought experiment, treat certain motivations or thought patterns or something like that as distinct from you, as like a separate person or a separate motivational system, that is either distinct from you or distinct from other motivational systems that are then somehow at play at different times or that you need to integrate in. And I can see why that would be helpful as a tool for

working your way through a problem. But as a literal description of what is going on psychologically, I think it's false in a way that, if you try to seriously integrate it with your reflective experience, you couldn't hold coherently. Like...

Divia (37:32)

Okay.

Okay, let me try to hash this out. So let me give you my brief description of what I think Internal Family Systems is. Okay, so I believe it was invented by a guy named Richard Schwartz. And I think he got there sort of phenomenologically, like empirically: he was talking to different people, and he noticed that sort of interesting things happened when he asked them certain questions. Because talking about parts, this is a thing people just do, though. "Part of me wants to stay in bed, but part of me is like, nah, you should get up and go to work" is a normal thing to say.

Shea Levy (38:14)

Yeah, yeah

Divia (38:16)

And then he would be like, okay, well, what does that part of you think about this? And people would go with it and give an answer. And then he started getting some interesting answers when he would ask, for example, questions like, okay, now can you ask that part of you to step aside? Because sometimes they would, and there's a little bit of nuance to, like, that's not necessarily gonna work. But it is also my experience working with myself and with other people that sometimes this really does happen. And it...

it means something, like then people are like, okay, and they do actually sort of set those concerns aside, which doesn't mean they won't come back or anything like that. And he also noticed that when he sort of kept asking people to do it, again, there's some nuance to it, it's not always gonna just work straightforwardly, that people would be like, no, no, this one doesn't feel like a part, this one feels like me. And that's what in IFS he calls Self. And it characteristically, like, it has some qualities: people experience more curiosity, more compassion, more whatever, whatever.

I've tried to sort of ask other IFS practitioners about this somewhat, like I've taken some classes and things. It seems like people are now like, yeah, well some parts sort of have more self than other parts and whatever, like, which seems kind of right to me. It seems like it's certainly true that people have some perspectives that are not totally integrated at this time. You would agree with that, right?

Shea Levy (39:33)

Yes, well, I mean, yes, I mean, I think that's, if it's an important one, I think that's like a flaw. Like if it's some surface-level thing...

Divia (39:36)

It sort of should be uncontroversial, I think.

Right, but that's the point of the IFS process, is to integrate the things. Like the first time I talked to my IFS therapist that I used to work with, she was like, I don't really have parts like the way I used to have parts. Like it worked, basically.

Shea Levy (39:46)

Yeah, okay.

So maybe it's just that the metaphor has been taken too far, like, the metaphor of family, I don't know.

Divia (40:02)

I do think it's a terrible name. I think it was because someone else was a family systems therapist. And then they were like, we'll call it Internal Family Systems.

Shea Levy (40:07)

Maybe then the only disagreement I have is with adopting a name that doesn't actually accurately describe what you're doing. I guess on the one hand it's interesting and reassuring, because now I want you to send me what you think is your best IFS, the best IFS resource. But that sounds more like, so the way I would conceptualize the same thing is: there are

Divia (40:17)

Yeah.

Hmm.

Shea Levy (40:37)

You can have different thought patterns and subconscious identifications that don't integrate at any given point, which, depending, in certain contexts you can oscillate between, or have both, you know, subconsciously activated at the same time, leading to a conflict. And you need to sort of tease out which is true, or, more usually it's not which is true, it's sort of like

Divia (40:45)

Mm-hmm. Mm-hmm.

Mm-hmm.

Shea Levy (41:06)

How do I reconceptualize this issue to understand that they're both identifying something real about the situation, but not in a way that I can integrate these together? But I think the biggest question I have with the description you have is to what extent these are coherent over time. And I guess maybe, maybe the answer is they're coherent over time insofar as you have not done the work and you have continually acted on one

Divia (41:08)

Right.

Wow.

Shea Levy (41:35)

or the other side of a given divide.

Divia (41:38)

Yeah, I think sometimes they are, sometimes they aren't. I guess a different thing I would say is, so I believe my IFS therapist when she was like, I don't have parts like I used to have parts. I tend to think it's not usually, it doesn't usually make sense to try to sort of resolve every polarity fully. Like if it were free, sure, but it's not. And so I guess I think that there's some shifts that I think of as more like, I mean, nothing's always worth it,

but closer to like overdetermined that it's worth it. That if some people's perspectives are like really sticky and unresponsive to evidence and wrong, then there's something I think very good about trying to free that up and get things moving in a way where they can integrate. But then I think, given, I don't know, like given my practical constraints, I think it makes sense for some things to be sort of persistently represented in more like a polarized way, but like that's pretty fluid.

Like a friend of mine, and I know it's not the same with groups, but as an analogy: I have a friend who sometimes talks about, like, I don't know, he would be involved in planning workshops. And sometimes there's the one person who was always like, we should have more structure, and the other person who's like, nah, we should talk to each other more. And there's some impulse to be like, okay, let's like really get on the same page about this, let's put you in a room and resolve it. And I think sometimes that can be right, but given practical constraints, I think sometimes it just makes more sense to have the group kind of split and you argue it out and, like, whatever.

Shea Levy (43:06)

I do think the group versus individual thing is... No, no, I know, I know, I know what you're saying. I guess I think the analogy fails, I guess. This is my...

Divia (43:10)

I said it's an analogy. Yeah.

Okay, I think it doesn't. I think it makes sense if there are certain patterns in me that are like, that are more attuned to some things. I can make it more concrete.

Shea Levy (43:23)

Can you, like, are you able to give an example? Yeah.

Divia (43:28)

Yeah, let me see. I, I don't know. I remember, so this is like a parts-type description of my life. You can critique it if you want. So about a little more than a year ago, when I was moving, I was pregnant, it was just like a lot going on. And I remember I was talking to my friend and I was like, yeah, some part of me is like sort of screaming at me that this is not okay. And we're like, okay, well, there's some like assumption that, well, that must mean there's something better than that, right?

And I sort of sit down and try to like introspect. I'm like, okay, I think this part of me is sort of like, look, it's not my job to figure out what thing would be better here. It's just, like, locally, this part of your mind really doesn't like it. So it's like a request for allocation of more resources towards making it not like this. And ultimately, like, I think this, you know, this move to introspect and whatever, it's sort of like an integrative move. But the part where some part of me is like, this is more like, I'm in charge of looking at this part of your mind and being like, are things okay over here? And it's like, no.

That seems fine to me.

Shea Levy (44:25)

So do you think there is actually some part of your processing which is the same thing that is caring about that same issue? That's the piece, I get in the moment.

Divia (44:38)

issue. I don't know about issue, but I think there's certain contours. Like I think I have different types of, I could try to be concrete, but it's hard. Like I think I have different types of processes, and that that makes sense because, like, I don't know, for example, I certainly have visual processing, right? Like, to use a really low-level example, I think sometimes it's true that my visual processing is kind of tired,

because I've been looking at really fine details or something. And so I think there is a relatively persistent part of my mind that's like, hey, part of you is like, give the eyes a break right now. Or something like other things like that.

Shea Levy (45:11)

Yeah, that I'm on board with, I'm on board with that. I think the place that I'm less clear on is like,

different persistent caring about, or attention to, different values. That's where I'm like, yeah, so, like, definitely different kinds of processing. Like sometimes I can do deep abstraction, or, you know, very thorough, like thorough reasoning through a problem. And sometimes a different process is like very generative thinking of just like,

Divia (45:28)

Mm-hmm.

Shea Levy (45:50)

coming up with ideas that might solve something, and that's not the same thing as checking all the boxes, right? Those are different. Completely on board with that.

Divia (45:52)

Yeah.

And I think they will persistently be different. Like, I don't know, maybe someone somewhere could integrate them, but it doesn't seem like a good idea to me. Right, they're not in conflict. Though they could be locally in conflict at some time, potentially.

Shea Levy (46:02)

they're not in conflict. They're just not integrated. That's why I wouldn't call it...

I mean, I think it's more, the conflict is not between, it's not like those processes are in conflict. The conflict is: which of those processes should I use? It's not like, I guess they're in conflict over resources. I can't do both, I can't do generative stuff at the same time I'm doing the really thorough, like, checking-the-boxes stuff. I can't do those at the same time. But to me, if there's conflict, it's either, causally, I'm not sure which,

Divia (46:21)

Okay. Yeah.

Seems right, yeah.

Shea Levy (46:39)

will lead to the aim I'm going for, or, like, I don't know which aim I should even be going for, right? And it's that part where I, it's that, that's the piece. And maybe the answer is that's not what they mean by parts that are persisting, but that's the piece that's always stuck out to me, like...

Divia (46:44)

Okay.

Shea Levy (46:59)

Like there's not a part of me that's dedicated to like caring about my wife, right? Like, okay, so like maybe.

Divia (47:05)

I would agree with that. I cannot, like, here, look, a central example of a part pre-IFS-work would be something that has some particular, so, like, I always tell the same story, because I like to tell stories about me, so it's not divulging, it's privacy, and some of them are weird or personal or hard to explain. So this is my one that I tell. So early on when I was working on this, I came up with, and it was experienced by me as, a part that was like really

sort of messed up. It was the part of me that thought I needed to always feel guilty about things. And I was like, all right, let me look, where did this come from? And I remember it, actually, because I remember being this age and not really having it, and then it was there, and being like, well, this sucks. Does it have to be like this forever? And I guess the answer was like, no, only 20 years or something, until I finally went back and figured it out. And there was some dumb thing that precipitated it, where I'm guessing it's overdetermined that something like this would happen, relatively overdetermined. Like I don't think it was

super caused by the thing. But there was this incident where I knocked over this pile of cups, and someone was like, hey, you knocked over a pile of cups. And I was like, no, I didn't. Cause I didn't realize that it had happened, I just didn't know. And then later I was thinking about it, and I was like, I did do that. But it seemed like there was no way back. I was like, I can't, I don't know what to do other than feel guilty about it forever. And yeah, and it's, it's dumb. Like it's not a good way to operate, obviously. I think it was sort of my best attempt at sense-making at the time. It seemed like I was

sort of stuck in some bind where I'm stressed out, and I was like, I could either sort of forget about what really happened or I could feel guilty forever, and I chose to feel guilty forever, even though it wasn't really forever. And of course, as an adult revisiting it, I'm like, yes, it is in fact possible to remember what happened and not feel guilty forever. Like these things are fine. I'm like doing some mental ritual to accept that and whatever.

Shea Levy (48:51)

So I noticed that other than when you introduced it, you didn't talk about that as a part of you. You talked about that as like a way you processed a certain thing, like you processed it.

Divia (49:00)

Yeah, but at the time it felt like a part. Before I processed it, it seemed more part-like.

Shea Levy (49:03)

Okay, so maybe that's.

Okay, so maybe that's the piece. So I'm very interested in this conversation. I no longer think it's going to get at like a, it's not a big difference, but I'm very, so, up to you if you want to keep going. So, the feeling like a part, I guess it's like, you processed this in a certain way, and that feels like it doesn't, it doesn't feel like it fits with your general way of thinking about things. Is that sort of it?

Divia (49:10)

Yeah. Okay, all right, cool.

Right, it seemed like, I don't know how to, like it just seemed like some, I don't know what's literally true about it, but when I would visualize it, it seemed natural to visualize it as sort of like a pretty separate piece of me. And I could like follow its threads and it was saying sort of a bunch of things that all seemed, like my guess, I have a pretty strong prediction that if I'd started following this thread on a different day, we would have gotten to the exact same thing.

Shea Levy (49:50)

Yeah, okay.

Divia (50:03)

that it wasn't very contingent on the way I was asked the questions or something. I think like that can happen. I think it's basically a bug if it ends up being a really contingent process. I think it wasn't. I think it really was something that happened that sort of locked in place like a piece of my personality that was failing to integrate with the rest of my personality.

Shea Levy (50:07)

Right.

Yeah.

Divia (50:23)

Which I think is a common human behavior, to be clear. I think most people have things like this. I think I still have things like that, I would assume, but I think not really the same way I did before.

Shea Levy (50:35)

Yeah, I think, okay, so I think there's a sort of like, what I would want to see to embrace this as a more thorough characterization of what is meant by it: what makes something a part versus just like a thought that occurred right now that is against my other thinking, or even a persistent pattern of thinking. Like maybe that's all it is, maybe that's all it is, a persistent pattern of thinking.

Divia (50:58)

I think it's sort of a persistent pattern of, like, thoughts and emotions and behaviors that tend to be clumped together, and there's some sort of time series thing. And it used to sort of disturb me a little, when I talked to people after I'd been studying this, I'd have something like: you kind of switched parts in real time while I was talking to you, that's weird, like your whole affect is now different from how it was a few seconds ago, because you've inhabited a different perspective. And I think some people do this more than other people, but I think it's a real thing.

Shea Levy (51:15)

Yeah.

So maybe this is the piece that maybe isn't actually part of IFS but has felt like part of IFS when I've heard it generally described, is that these parts have agency in some sense, as opposed to like, and like I don't, in your description, that's not there. So maybe that's like the piece that's like, that like,

Divia (51:41)

I don't, how is it, is it not there? I mean, I think it had agency in that I really was feeling the guilt the whole time. Okay, you don't mean that.

Shea Levy (51:48)

No, no, no, not that as a fact, but that like, there's like a part of you whose goal is to make you feel guilty, who is trying to allocate resources to you. Okay. Yeah. Okay, I think there's something.

Divia (51:55)

Yeah, I would agree with that. I think it was doing that, yeah.

Because I think it thought something worse would happen if I didn't do that, right?

Shea Levy (52:07)

Yeah, so that it would think something worse. That's, I think that's the piece where it's like...

Divia (52:10)

Because when I would introspect about it, I'm like, go through some processes, like, well, what is the part of you saying? And I remember being like, well, it seemed like it cares about what's true and it doesn't really know how to, like, I would get answers like that, and they felt resonant. And then, like, I think I also would always, well, maybe that's a distraction, but part of why I take it seriously is that then, when I did some mental reconceptualization of it, it changed. Like, I really didn't feel guilty in the same way afterwards. That's why I'm like, it was a thing.

Shea Levy (52:19)

Right.

Yeah, I mean I think, okay, so like I think this might be the basic disagreement and hard to tease out, but like.

I totally buy, and I'm not even saying this is other people, I have things like this: there's a persistent pattern of thought, maybe a particular implicit belief or value that you have, that, in contexts where it's activated, regularly puts you in a mindset or brings up certain emotions or considerations that don't integrate with your normal mode of thinking. That I buy.

It's the next step of conceptualizing that, not just as a persistent aspect of your overall subconscious, but as something that has goals or something that has agency. Pinning that down would be hard, but I think there's something about that, that's the part I disagree with. And my suspicion now is that if we got to the point of

Divia (53:34)

Yeah.

Shea Levy (53:45)

making explicit what that thing is, it would either dissolve and we would be fully in agreement, or, if you said it explicitly, you'd probably say something like, I don't believe that literally. But that's my guess.

Divia (53:53)


So one thing I do think about it is it seems like I was surprised at first when I would query these things how coherent their answers were.

Shea Levy (54:11)

Yeah, so think there's an explanation for that, that doesn't require them to be like, sub-agents with their own minds and thoughts.

Divia (54:17)

But I'm like, what does that mean, with their own minds and thoughts? Like, I don't think... Yeah, yeah, no, so that's why I'm asking. I'm like, I think it's... Okay, maybe an objective thing I could say is: I think there's quite a lot of structure there. And I don't think the structure is formed during the process of asking questions. I think the structure was already there and is revealed through the answering of the question. Okay, we agree on this.

Shea Levy (54:19)

Well that's part of my point, that's part of my point, that's part of the issue I have is that like...

Yes. Yes. So I agree with those two things. I agree with those two things, that that is a thing that happens. Yeah. And so, I think basically my objection is that, and I would have to know much more about the details of the theory, but I think basically, when you say, it wants this thing, or it had this goal or something, either ultimately you're being sloppy with words, or

the only way to ground it is some little homunculus with its own... And obviously if you say that, that's ridiculous. You don't endorse that.

Divia (55:07)


Okay, I think, no, no, here's what I think it is. I think that there's some part of the thing that I would more call myself that it's safeguarding. And so I'm like, it is ultimately me, but it's not, I think parts of my cognition will take responsibility for different angles on the values.

Shea Levy (55:23)


Let me say, instead of interrogating your conceptualization, let me try to give mine for that kind of thing, right? So, let me see if I can find a concrete example in my life that...

Divia (55:31)

Okay. Yeah.

Shea Levy (55:49)

Yeah, okay, so like, it's less so now, but there is a sort of common pattern of thinking that I have fallen into of, when things are not going well in some respect, being very reluctant to surface that to others, especially to Alyssa. And, like,

seeing it as an additional failure if I need to make that visible and ask for help or get support on it in some way or other, you know. So what I would say is that that stemmed from a mistaken view of how to achieve a real value that I endorse, right? There's a real value of self-sufficiency, and

I don't have a full articulation of masculinity that ties into that. I actually do think these are real values. And I do think there are things that I do now that I might have historically confused for this, but I now think about it in a very different way. So, in that sense, it was a sort of mistaken thought pattern, which at the time didn't integrate well with the way I thought about

social support and pursuing values and being open with communication and all that, right? I do think it stemmed from a value that I hold, and it's related to my values, not its own values. It's related to my values.

Divia (57:31)

Yeah, okay, I will agree with that it's your values, not its own values. I don't think the values are separate.

Shea Levy (57:35)

Yeah. But, like, I guess, I guess the, the piece that I still, like, it's not like a subroutine that I've set up, you know, subconsciously, to defend that value. Like it is, it is like...

Divia (57:50)

Why is it not?

Shea Levy (57:54)

Why is it not? That's a good question. think, okay, I think what it comes down to is like...

It's a, I think the fundamental thing is a mistaken thought, like a mistaken idea about what that value requires or entails in a certain context. And so I guess, I guess, okay, it's like: once it's integrated, it's not like there was a separate agent which is now integrated in. It's that my ideas are now coherent. And so, like,

Divia (58:16)

Yeah, I would-

Shea Levy (58:31)

one idea, one view, one thing I hold about reality gives this answer in relation to this value. And another thing gives the same answer, because those things I hold are now integrated. When it gives the same answer, it's not like this idea has agency. It's more like... Yeah.

Divia (58:36)

Mm-hmm.

Okay, all right.

Okay, here's what I think I'm willing to defend. Here you're talking about this. Have you heard of Martin Buber's I and Thou?

Shea Levy (58:55)

I've heard of it, but I don't know what it is.

Divia (58:57)

Okay, I don't know what it is either. But one of the things is, it's in contrast to I-It. I think that it's better, according to my experience, like in a bunch of ways really, to have an I-Thou stance towards these parts of me than to have an I-It stance towards them.

And to interact with them as a thing where there's mutual respect and mutual change back and forth, and not objectify it.

Shea Levy (59:17)


Yeah, so, but it's not objectifying it. It's like, so, the I-Thou, I agree with it that the I-It is a, okay, but the Thou externalizes it in a way that I think is wrong. It's like an I-Me, right? So, I mean, I think there is, the way that I think about sort of how to handle this thing is like,

Divia (59:27)

Well, I don't know if you are. I don't have a claim that you are, but I think it's a very common thing that people do.

Yeah, okay.

Shea Levy (59:54)

there are values at stake leading to this thought process and this emotional conflict. There are values that I hold at stake and I want to identify what are my values and then maybe beliefs about those values that are leading to this conflict and how can I align those values. So it's not like, so like

Divia (1:00:13)

Yeah, okay, but here's, I think I'm mostly with you about the part, like, it's more accurate to call it "my." I'm like, yeah, I think sort of pragmatically, when having a conversation, to be able to hold a lifetime of experience talking to someone else, it's kind of nice to draw that in or something, whatever, it does seem more accurate to call it "my." But the thing is, it seems like, it seems like you're talking about it as though the agency is in the part of you that you're more identified with. And I'm like, I think it works way better to treat it

as like, who knows what this thing will do now presented with this information? Like, not like I'm gonna change it.

Shea Levy (1:00:46)

So the agency is not fully within my present considered self. The agency is within me across a lifetime. Right? But it was me who had the initial...

Divia (1:00:55)

Yeah, yeah. So maybe what I'm, I don't think I disagree with that, but I think I'm like, look, I think the language that you use in the moment to describe it will, and it maybe depends on the people, actually, I think it does, but I think for many people, at least the ones that resonate with the process, using the sort of standard language about the parts and the you and the me brings to mind the right sort of stance for actually... Yeah. Okay, you think it's gonna...

Shea Levy (1:01:20)

Okay, here it is. Here's the thing that I was suspecting might be there, and I'm not sure. This is now getting into the rationalist thing. That kind of pragmatic view toward... like, it's one thing to hold that explicitly as, this is purely a thought experiment to trigger a certain thing, but letting that shape your...

Divia (1:01:25)

Yeah. Yeah.

Shea Levy (1:01:45)

ontology, the way you actually think about it in your own thoughts about what is actually happening. That's a thing that I thought was much more suffusing this than it actually is. So it might be something that... but what I would say concretely to that point, like, if that's true,

Divia (1:01:55)

Okay. Yeah.

Yeah.

I think there are no perfect pronouns for talking to one part of oneself.

Shea Levy (1:02:07)

Yeah, so then I would say, that's how I would formulate it. I'd say, look, it's not actually a separate part of you, but we don't have a good way of talking about this. So this is the best we have right now, and it would be an advance if somebody came up with a better conceptualization, and like, I would

Divia (1:02:20)

Okay.

So for the record, I've done this with people who are like, it's not a part of me, it does not want to be called a part, it wants to be called a whatever. And I do always go with what people say. And I try to take it seriously too. I'm like, okay, we're talking to the whole of you, fine, cool. Okay, it's just you, fine.

Shea Levy (1:02:33)

Yeah. But, I mean, the epistemic part of it is, in my own thinking, in my own notes, when I'm not happy with a conceptualization, I will put things in scare quotes, as in, this is not the right way to put this, but I don't know, or it's not worth figuring out what the right way to put it is in this context, but like,

Divia (1:02:53)

Yeah.

I'm not yet convinced that "part of me" is particularly wrong. I don't know, what is a part? "Part" means like...

Shea Levy (1:03:08)

Again, if you're attributing separate motivation to that part, or separate agency, or separate

Divia (1:03:16)

I think it's separate, but I don't think it's... I think it's... No, I do think it's ultimately mine. But it's not all of... It's not like, if I just say mine, I think that calls to mind like a more integrated thing. And I think I don't know what it is, and I think I should treat it as like some unknown to be curious about. But no, I don't think it's separate.

Shea Levy (1:03:19)

Not that you don't think it's yours.

Right?

So, okay, maybe here's a thing.

when I'm in that state of conflict, I can identify which side of the conflict is more in line with what I would expect myself to feel in my normal thinking. But I don't give that side of the conflict priority. Like, my stance going in is that both sides are stemming from real,

real values that I have, or think I ought to have, and real ideas that I have. And I'm sometimes fishier about the one that's more in line with my normal thinking, because that's the one that's easier to justify, because, you know, I can put words to it much more readily. But in that moment, it's not, I guess, like,

Divia (1:04:26)

Yeah, cause you can like slip something in there. Totally.

Shea Levy (1:04:40)

Yeah, maybe this is a little bit phenomenological. I guess I would say, I think it's a phenomenological thing that stems from a healthy view of my psychology, but maybe a little bit more of a phenomenological claim than a metaphysical claim: it presents to me as conflict. It doesn't present to me as, like, something...

one piece of it waging war, that's too violent and extreme, like, interceding with the rest of me. It's two strands of thought, or maybe let's say two parts of me, two parts of me conflicting, it's not like a part of me conflicting with me. I don't know if that-

Divia (1:05:07)

Yeah.

Yeah. yeah, I agree with that too. I think parts are ultimately never really in conflict with the self. They're in conflict with other parts. I do think many people do describe it as something more like a war inside themselves, for what it's worth.

Shea Levy (1:05:37)

Yeah, no, I think that's true. I think then my question, with my sort of theoretical-psychologist hat on, as opposed to clinical, is like, okay, what is actually going on that makes it feel that way?

Divia (1:05:53)

I mean, I think often there's a pretty total war type attitude from some of their parts that are like, there's no way to achieve this part of my values through normal means. So instead I'm gonna do all these kind of extreme things. I think that happens.

Shea Levy (1:06:08)

Yeah, like, yeah, okay. So, right, so maybe one way to think about that is like,

you've sort of accepted on some level an inherent conflict between two deeply held values. And so depending on which

Divia (1:06:28)

And if I were doing something therapeutic with someone, I'd be like, okay, well, let's like, you know, like I wouldn't necessarily be taking that as true.

Shea Levy (1:06:34)

No,

Divia (1:06:46)

Mm-hmm. Yeah, okay.

Right. Right.

Shea Levy (1:07:03)

they have at some level accepted an inherent inability to gain two key values, each of which individually they've accepted as necessary for their happiness, and they've also accepted that they're in an inherent conflict, and so they put themselves in a position to do that.

Divia (1:07:16)

Mm-hmm.

And they don't see a better resolution path also, and nor do they necessarily have faith that there could be one, except, I don't know, if they're telling someone about this, maybe on some level they do.

Shea Levy (1:07:29)

Yeah, and then I think the extra piece here is that it depends on the combination of their pattern of thought and what is relevant to them in the situation, which of those values is more motivationally salient and winning out, and how those conflicts arise, and how...

Divia (1:07:50)

Mm-hmm.

Yeah, like, I think some language that the transactional analysis people use that I haven't seen elsewhere is, they talk about how, when there is some sort of polarization, often people describe one of them as like, well, I think this, but... and then there's another part that's cathected, which I think is the word, which is the one that's controlling your body, which is not always the same for people. They'll be like, I don't know why I'm doing this, this seems like a terrible idea, and yet they're doing it. So obviously there's some sort of different process.

Shea Levy (1:08:23)

Yeah, yeah, sure, yeah. Interesting. Okay, so I think my takeaway here is, if there is an existing resource which is not therapeutic, I'm interested in what they think is actually going on and what the right way to conceptualize it is, and then separating that from the question of what's the right way to talk about it in a therapeutic context to make it easy. If you have any such resource, I'd be very interested in that.

Divia (1:08:35)


Yeah.

Yeah, I don't know what...

Shea Levy (1:08:51)

What's the ontology of IFS?

Divia (1:08:54)

I think it's fundamentally really pragmatic. Like, I think that's where it came from: the guy working with people and being like, people seem to be talking about this, maybe I go with how they're experiencing it, and...

Shea Levy (1:09:02)

This gets back to the epistemic point of like, if that's how I'm holding an idea, I want to be very clear that I'm holding it as this is a pragmatic tool and it would be useful, maybe not important, maybe not useful enough to be worth going into, it would be useful to actually understand the facts that make this work. I don't need to know those facts in order to know that if I do things this way, it works. So I have a similar attitude towards gamification.

Divia (1:09:07)

Yeah.

Yeah. Yeah.

Yeah.

Shea Levy (1:09:31)

I have a more thorough view now than I used to about why it is that gamification is useful. But I think it's acceptable to say, yeah, gamification seems to work; I do the thing that I want to do better, and that is enough. That is something you can hold as true and valuable without taking a stand on some deep claim about what human motivation is all about, dopamine hits or whatever.

Divia (1:09:57)

Okay, look, look, I think I'm on record being like, ultimately this is a pragmatic thing, but I think it does make some claims about like structurally what's going on in people's minds.

Shea Levy (1:10:07)

Yeah, so I think I would love to know what those are, and I don't know if we can get into that here, but...

Divia (1:10:11)

Okay, yeah, bookmark. We can maybe talk about that. I think I have talked about it a little; one such claim that I think I already flagged is that if people sort of manage to approach certain parts of their cognition with curiosity and compassion, there is quite a lot of structure there, in ways that are not usually evident without doing that. Claim one.

Shea Levy (1:10:26)

Yes. Yes. And so I completely agree; there's a lot of structure. There's structure to this conflict. It's not just that that part disagrees and I just shut it down or ignore it. Right. Yeah.

Divia (1:10:40)

Yeah, and it's not like the only option that people could get is confabulation if they try to go down that road. That's one claim. Anyway.

Shea Levy (1:10:43)

Right. Yeah. Cool. Okay. So I guess maybe getting back to the original discussion: why do you call yourself a rationalist?

Divia (1:10:53)

Yeah, okay, so one thing is that I really do come out of that tradition. Like, I think that reflects my... I don't know, like I have certain things that I've sort of believed since before that, but then I found Overcoming Bias and those people, and I moved to California, I met a bunch of those people, and it was a pretty transformative time for my way of thinking. And I think I came out in a way that was deeply influenced by that

Shea Levy (1:11:02)

Okay.

Divia (1:11:21)

community mostly to be more like it than before.

Shea Levy (1:11:24)

Okay, so you said "that way of thinking." Do you think you hold that way of thinking, whatever it is, or is it just that it influenced you?

Divia (1:11:33)

I mean, some of it and not other parts of it? I don't know. So, okay, one thing I would say concretely is, I think that I grew up in an environment where... of course, like, I don't know. There was definitely some "let's really figure out what's true, let's point at things that are incoherent and care about them." But the overall density of it was not really enough for me to have a social context that kind of gelled in that way,

Shea Levy (1:12:03)

Yeah.

Divia (1:12:04)

if that made sense. Where, like, I remember, and you know, I have this Less Wrong draft post about this from, like, I don't know, more than 15 years ago; I never wrote it. But I was trying to convey this idea, which is like, okay, and then I got there and I was like, everyone's kind of enough on the ball that some other part of my brain kicked in and was like, yeah, come on, let's really make sense now. Like, the social environment is expecting it of you. And it was very helpful for me.

Shea Levy (1:12:32)

Yeah, I mean, that's the piece. Like, if that's what a rationalist is, which, like, there's a...

Divia (1:12:39)

I don't know that it is; it's not my full answer. That's the beginning of my answer.

Shea Levy (1:12:41)

There's ground to be defended here, right? That that's really what it is to be a rationalist. I guess I would say there are two parts to it: one is that you have this attitude and take it seriously, and two, that you got it, or you mostly experienced it, within this particular social milieu, right?

Divia (1:12:44)

Mm-hmm.

I think it's a little different now. I think more of it's in the groundwater; more of the people I've met have different... like, I think it's less clear cut than it was then. I also think that,

I mean, and I do... I don't think this is in my bio, because of limited characters, but I would definitely say, like, a Less Wrong-style rationalist. Because some people mean, like, you don't care about feelings or something, and I don't mean that. And I also...

Shea Levy (1:13:22)

Yeah. Rationalist is an overloaded term for sure.

Divia (1:13:26)

It is, but it also reflects, like, a bunch of... like, if I talk about, I mean, maybe one of the conceits of rationalists such as myself is that I basically would never say "that's rational" or "that's irrational" in the way that people who don't call themselves rationalists frequently do use those terms.

Shea Levy (1:13:48)

Okay, so maybe there's a good disagreement.

Divia (1:13:51)

Look, I think a bunch of my language use and stuff like that, like I think if someone were trying to understand where I was coming from and they had the like, this person sort of grew up to a certain degree on less wrong, like I would make more sense. And, and crucially, that I have, unlike a bunch of other people, I'm not like, and I'm post that now. Because that seems way more wrong to me. Like it's true that I don't agree with a bunch of stuff on less wrong, but that's like, and I disagree with Elia's or about various things, but that's like,

Shea Levy (1:14:03)

Yeah.

Yeah, sure.

Divia (1:14:20)

Typical.

Shea Levy (1:14:21)

Yes, so that's fine. I think, okay, maybe there are sort of two avenues of attack here. One that came up, that I think is tied to the "I wouldn't call things rational or irrational" thing: do you think Less Wrong is a good name? Is it a good thing to aspire to? Okay. There's an underlying epistemic claim, which I think is the same in Overcoming Bias, of like, yeah, you can't be right; the best you can do is just, you know, eliminate the errors, right?

Divia (1:14:35)

No. It's not a good name, no, but yeah, it is what it's called, then.

Yeah, which is that you can never really be right. Yeah.

Yeah, I also don't call myself an aspiring rationalist. I call myself a rationalist. This is another one of those things.

Shea Levy (1:14:54)

So, yeah, okay, if you don't buy into that, the most I can say, which is not a very strong claim... I guess I wouldn't say I'm contra Divia. The most I can say is that

Divia (1:15:08)

Yeah, sure.

Shea Levy (1:15:12)

I think there are...

There's path dependency in anyone's terminology, and that's not objectionable, but I think there are elements of Less Wrong rationalist jargon, terminology, ways of approaching things, that reflect some of these issues, even if you yourself no longer endorse, or never endorsed, the underlying claim. So, no, no. So, like,

Divia (1:15:39)

You do think I shouldn't call myself a rationalist? I've thought about it a lot. Okay.

Shea Levy (1:15:44)

I mean, I think part of the issue is, and here I'm gonna get to the philosophy of philosophy: what is philosophy for, and why do we need one? Like, I think you do in fact need some kind of worldview that you hold, and having a name for it, and

Divia (1:15:53)

Mm-hmm.

Ethical grounding is back to that.

Shea Levy (1:16:13)

Ideally having a social scene that is at least somewhat aligned to it is extremely valuable. And so like, I guess what I would say is like,

If you don't... yeah, so I think maybe one of the biggest problems with the rationalist space is that either what it is to be a rationalist is this sort of bucket of philosophical views, many of which I think are deeply wrong, or it includes a sort of meta-view that it doesn't matter that we all have very different philosophical views. And of course there can be some social things that cross these lines.

Divia (1:16:52)

I mean, what do we mean by doesn't matter?

Shea Levy (1:16:55)

Like, I guess like.

Divia (1:16:59)

Like we shouldn't try to resolve them?

Shea Levy (1:16:59)

Huh? No, no, no, no. Not that you don't try to resolve them, but that, like...

Okay, Descartes' Meditations: he opens up basically saying, I've known for a long time that I can't integrate my shit, like, my ideas, but I haven't had time to do it, and now I finally have time, and this has bugged me for a while. To me, I think his answer is that he fails. But that motivation of "I need to integrate my ideas, I need to actually have an explicit integrated worldview here, I personally need to have that"...

Divia (1:17:21)

Okay.

Shea Levy (1:17:40)

I think that is a healthy attitude, and it is something that a proper philosophy would inculcate in everybody. Now, that's not to say that everybody would be a philosopher, right? But, I mean, in the world I'm envisioning, there would be a culture-wide philosophy which actually integrates... and it's not that I can blame somebody who is... I can blame Eliezer. I think Eliezer has, like,

explicitly thought philosophically, and he doesn't have this view. But the fact that that doesn't bug you, that you can't articulate an explicit philosophical worldview that grounds... huh? It doesn't? Well, even if it doesn't have a name, what is it? It's not rationalism, because you don't endorse any of the things that I've called out. So what is it, if it's not rationalism? Right. Yeah, I mean,

Divia (1:18:15)

That what doesn't

the name and stuff.

I even their name.

You're saying that's okay. Okay, okay, okay. So what is... yeah, I mean, I do care about having a consistent worldview. I don't think that there's any name that I have. I mean, like, I do have objectivist friends. You're one of them, but...

Shea Levy (1:18:48)

But the basic way you self-identify doesn't seem to

Divia (1:18:55)

Mm-hmm.

Shea Levy (1:19:01)

incorporate it. Like, if I say I'm a Christian, and I mean it like an old-style Christian, right, that self-identification incorporates a whole worldview, right? When I call myself an objectivist, so does that.

Divia (1:19:03)

Yeah, okay.

Sure. Yeah, that's, I mean, but...

I don't know. I haven't found some community of people that are like, this is the thing.

Shea Levy (1:19:26)

But the question is, does that seem like a flaw? Maybe not a flaw worth fixing, but does that seem like a flaw to you? Or is it like... Well, yeah, not that you're supposed to, but that it would be better if I had it, like, something. Yeah, okay. Okay, so maybe, okay, again...

Divia (1:19:35)

Like in the world or in me? Like am I supposed to make it?

It'd be better if I had a community of, yeah, it seems better if I had that.

So I think it doesn't seem tractable to me. Like maybe I'm wrong, but it does not currently seem tractable on a community level.

Shea Levy (1:19:57)

Like to have a community oriented around a worldview. Your worldview.

Divia (1:20:02)

My worldview. Or like a worldview that I'm like, yeah, it's probably right about things; if I disagree with the community, I sort of expect to update towards its position.

Shea Levy (1:20:12)

Right, so yeah, it doesn't seem tractable because too much of, like not enough people are aligned with you today or like something about your worldview means it just won't ever happen.

Divia (1:20:22)

I don't mean ever. I don't know, I'm like, I'm really sad about that one way or the other. But no, I don't think I mean ever.

Shea Levy (1:20:27)

It seems to me like I am not going to live in an objectivist community. And I would say there's an objectivist movement, and I would count on one hand the number of people within the objectivist movement that I feel like I would let into my objectivist enclave. Some of them I maybe don't like personally, but I think they've got, like... so, like, I'm with you there, and

Divia (1:20:49)

Yeah.

Yeah, there are individual friends that I feel pretty aligned with, but it mostly seems like the people I know that are really taking this type of stuff seriously are mostly like, we're doing some trailblazing thing that kind of works for us. And yeah, of course, ideally it would be more complete than that, but we're kind of on some sort of frontier, so what can we do?

Shea Levy (1:21:12)

Yeah, I guess, to be clear, the primary thing for me is not the social scene, it's the system of ideas. So if your view is like, yeah, it would be great, it would be really valuable, if somebody could articulate the system that reflects my ideas. And, you know, maybe...

Divia (1:21:18)

Okay, articulated.

Or if I could more... like, I do try to a certain extent, but I also have... maybe this is a real disagreement. I think on the current margin... nah, maybe it's not. But certainly my newest set of ideas, the ones that seem most fundamental, I'm not super eager to write down so that people can read them. I want to talk to my friends about them.

Shea Levy (1:21:54)

Yeah, no, My point is not that everybody has to be intellectual, right? That's not...

Divia (1:21:59)

No, I wanna be an intellectual and I do wanna share my ideas, but I think there's a gap between what is really my most bleeding edge worldview about this and what I'm willing to be explicit about in public anyway. No, that's not good.

Shea Levy (1:22:12)

Yeah, I mean, so, yeah, because you want to validate it and work it out and get it right. Yeah, it's not about a rush. I guess, I mean, again, this is another area where I'm sort of contrasting... like, when you talked about that way of thinking, when you were talking about what you took from Overcoming Bias and Less Wrong... anyway,

Divia (1:22:20)

Okay.

Like, so many... like, I sometimes forget, 'cause I'll hang out with people that really don't care.

Shea Levy (1:22:40)

I called that out up front, right? I called that out up front, like that.

Divia (1:22:43)

Yeah, yeah. No, I know. So, but that's, I'm like, that's enough for me to call myself a rationalist.

Shea Levy (1:22:47)

Yeah, okay. So I think that's fair, to say, okay, I want to know what's true, and the rationalists were the first place where I got that as an explicit message.

Divia (1:22:57)

And it's still there. Like if I go to a Less Wrong Meetup and someone's like, if someone says something and I like know that that's just actually not true, I'm like, we have a shared way to communicate about

Shea Levy (1:23:11)

Can you hold on one minute? I need to take a call. I will step back in a minute.
