Mutuals
Mutual Understanding Podcast
Ozzie Gooen

Ozzie Gooen

Creating an estimation utopia and establishing justified trust in institutions.

Coordination among 8 billion people is very tough. We're very far away from doing that with the intelligence that we have now, it's incredibly costly to send information to different people and for different people to learn about each other in order to trust each other…  we should kind of expect that people will have a lot of trouble coordinating on a big scale. But if we could then we’d grow a lot!

In terms of at least ‘failing with dignity’, do we think that the public did a good job in trying to investigate this and was misled? Or do we think that the public just, like, did a terrible job at doing anything coordinated and just got hoodwinked super easily? I think we fall a lot more into the latter camp.

Ozzie Gooen is the president of the Quantified Uncertainty Research Institute (QURI). In this episode we discuss Utilitarianism, improving trust in organizations, communities, and governments, and his work in building better software for thinking, forecasting, and estimation.

Links from the Show

Timestamps

[00:01:00] Worldview

[00:08:00] Starter Pack Philosophies

[00:12:00] How tools for better Epistemics fit into Utilitarianism

[00:20:00] Mistake theory vs Conflict Theory

[00:24:00] Improving EA institutions

[00:30:00] Justified Trust in Governments

[00:38:00] Contracts and monitoring for evaluating orgs

[00:46:30] Estimation utopias

[00:52:00] Centralization vs Decentralization

[00:58:00] The value of a good investigation in the case of FTX

[01:05:00] The importance of the OpenAI Board

[01:09:00] Estimating Relative Values

[01:18:15] Shared intellectual infrastructure

[01:26:00] Epistemically mature civilizations

Transcript

The transcript is machine generated and likely contains errors.

Ben: We're excited to have Ozzie Gooen with us today.

Ozzie: Excited to be here.

Divia: Thanks so much for coming.

Ben: Let me briefly introduce you for our listeners who are unfamiliar. Ozzie is the president of the Quantified Uncertainty Research Institute, a research organization dedicated to advancing forecasting and epistemics to improve the future of humanity. He's previously worked at the Future of Humanity Institute, appropriately enough, and might be most well known for creating Guesstimate, a wonderful spreadsheet-like tool for handling distributions and uncertainty. It is still one of my favorite apps in the tool for thought space. I still use it a lot. So Ozzie, welcome to the podcast.

Ozzie: Yeah, thank you.

Ben: So we have a lot of different topics that we were interested in covering with you today. One that I just wanted to start off with was, I think of you as a real, true utilitarian, a believer in the ethos and philosophy. Do you, would you agree with that kind of categorization?

Ozzie: I'd say in many ways, I believe a lot of the tenets of utilitarianism, typically quite a bit more so than other people. That said, that doesn't mean I take extreme actions in the way that people may expect for a pseudo-utilitarian. I think I am much more normal than you may be thinking.

Ben: Yeah, it's a great caveat. What was your intellectual journey towards utilitarianism? How long would you say that you've held this ethos?

Ozzie: When I got to college, I had a conversation with someone about some moral topics, and they said, you sound a lot like a utilitarian. So I just researched the phrase, and I was like, yeah, that sounds great. So I think during high school, I just had kind of similar beliefs. Before that, I liked maximizing things. It seemed like I wanted to try to do more good than less good. And then later, I discovered effective altruism when I was in college, and then have kind of been into that area since.

Divia: So since you said, and I certainly believe you, that compared to the stereotype people have of utilitarians, you're more normal than that. I'm curious, when somebody said you're a utilitarian and you looked it up, whether you encountered a lot of the things that other people might think of as gotchas, like, that's why utilitarianism doesn't make sense, and how you thought about that at the time.

Ozzie: I mean, this was probably 2008 or something, so there wasn't that much of a known community of utilitarians. People kind of talk about it in the abstract. So, yeah, I mean, I think there are just some arguments that people assume are ridiculous that utilitarians have to take, but there wasn't much of a notion of what specific utilitarians would be like.

Divia: I was assigned to write some essay about utilitarianism where we had to talk about, like, okay, but then why don't you take the guy's organs? Or do you take the guy's organs for the best utility of all the other people? Like, I don't know, did these things bother you? Or were you just like, well, no, utilitarians wouldn't do that anyway, or?

Ozzie: I think at that point, some of the utilitarian blogs kind of existed, so when I started looking into it, they gave answers I thought were reasonable. I didn't take philosophy classes in college, I studied engineering, and most of

Divia: Okay.

Ozzie: my colleagues were very much from a scientific paradigm. So they definitely didn't introduce that many questions like that.

Ben: Your colleagues who were also in the engineering and STEM discipline, would you describe many of them as having utilitarian beliefs?

Ozzie: We had a small squad in our college, so we made something called, sorry, Future Tech, or more specifically AI and Future Tech, so it could be at the top of the alphabetical ranking.

Ozzie: So that was kind of like an EA club in 2008 to 2010 or '11 at Harvey Mudd College. We had a few people who were pretty utilitarian leaning. That said, I think the rest haven't quite gone on to AI safety or the obvious utilitarian things. I think they've taken more regular lives, which, yeah, is good for them, but there were a few of us with that side. Most of the college didn't seem incredibly... you know, they were more engineering majors and things like that. They were practically minded. I think most people just haven't thought about morality. Maybe people did in the 1900s. I'm curious how many college students, even philosophy students, are that deep into it these days in undergrad. My impression is that a lot of people just haven't thought about this stuff too much.

Divia: Seems right.

Ben: I did wanna ask more, though, on the paradox that Divia brought up. I feel like utilitarianism in particular, as far as philosophical stances go, tends to get more of these paradoxes thrown at it, in part because it is more of, like... coherent is the wrong word, but in some ways it is more specific.

Divia: yeah.

Ben: Yeah, exactly, it's more specified. And so of those, I'm just curious about the organ donation paradox: do you kill the person to save the five lives with the organs? What is kind of your stance, and also how do you relate to the general question of biting some of these bullets when it comes to utilitarianism?

Ozzie: Yeah, so there's a whole lot here. I've been kind of treating it as, like, utilitarianism probably isn't a great starter morality, right? Like, if I'm gonna take someone who's not gonna think about it that much and is just gonna go off and start doing crazy, you know, actions in the world, I probably won't give them utilitarianism.

Ben: Why not?

Ozzie: Well, because, you know, it says that it's not, like, absolutely the end of the world if you lie or, like, deceive other people. Like, you have to think through why you shouldn't in practice, right? But for a lot of people, it may be very advantageous to say to them, if you lie or cheat or steal, you'll go to hell for eternity, right?

Ben: Mm-hmm. Mm-hmm.

Ozzie: Like sure, that's like a great incentive to, if you're scared about what this person is gonna do, that's just like a better thing to tell them in order to get them to never ever do those things, or like to try

Ben: Right.

Ozzie: not to. As opposed to a value system where you say, like, look, you're trying to maximize this broad agenda, and there are a lot of trade-offs when doing that, in which case they could make a lot of really stupid moves with those things. So I think there's a broader class of intellectual power tools, like a bunch of different techniques that require sophistication in order to use.

Ben: Mm-hmm.

Ozzie: But if you are reasonable, if you can use them well, then there's a pretty high ceiling of what you could get to.

Ben: Gotcha, so something like...

Divia: Do you have examples?

Ben: Oh, yeah.

Ozzie: Yeah, I mean, examples of power tools would be things like data-driven decision-making. If you use the wrong data, then you're going to just do really stupid things. Systematized evaluations, so just, like, rankings of how good things are. There are just a lot of ways to mess that up if you don't know what you're doing. First-principles thinking, like almost anything that's not just, kind of, do the default thing that people do. Yeah, there are going to be ways to mess it up. It's incredibly hard to give people who may be very naive a small set of rules, like, here's a cool thing that you could do that could get you pretty far, and have that be very resistant to them potentially messing it up if they want to, right? Or if they're just not going to be that careful and knowledgeable about it. I think similarly, when it comes to building physical things in the world, if you want an electrician to work on your house, you know, it's understood that power and electricity are powerful things to have, but you probably need a license in order to be able to do things with them. Otherwise, you'll just kind of shoot yourself in the foot and burn your house down. Similarly, with things like utilitarianism or data-driven decision-making, or a lot of things like that, there are a lot of ways to mess them up if you're not careful. Unfortunately, you don't really have any great license. Like, you don't have a utilitarian license that says, oh, you know, you've gone through this agenda, we're pretty sure that you could handle being able to do things, like handle how estimates actually work, you're not dramatically overconfident, and therefore you actually have the... There was something called, sorry, I'm blanking on the name of it. I think Government House utilitarianism, which is the idea that government officials should be utilitarian, but they don't tell other people about utilitarianism because

Ben: Mmm.

Ozzie: they'll probably mess it up. Similarly, one major reason religion gets promoted is because it gets people to be more moral in a lot of cases. It may be wrong, but at least if people believe that they'll go to hell for eternity if they mess up, then that is a motivational factor. So in that sense, I think that there is an existing understanding that some of these things can definitely be messed up by a lot of people if they aren't very nuanced about it. But that doesn't mean that it's not true, right? There's a very different question of, do we actually think that this is kind of the correct thing? And how high is the ceiling? And is it possible that people with some sophistication can do a pretty good job when using these? And I think the answer to that is positive.

Ben: I love this thread and what it brings up. In particular, I'm curious, all right, if you think utilitarianism is a philosophy that has a very high ceiling, like you can do a lot of potential good with it, but it also has fewer constraints, you can shoot your foot off with it as well, metaphorically. Do you have a preferred philosophy for the starters? Like what's the starter pack philosophy you'd recommend to someone?

Ozzie: I think generally, this is a very complicated question. I think some of this, you can imagine how far people would want to go on it. Hypothetically, if we were just optimizing for what would make the best world, you could imagine one extreme where we don't care about epistemics and we're willing to craft Religion X, which is a religion that, if people believed it, would have the best outcomes. So we just, like, engineer the definitions of heaven and hell in order to get people to act exactly how we would probably want. I would be pretty hesitant to, you know, do that. So I think my intuition is to be honest with people and not play this two-sided thing, and say that, generally, I believe that utilitarianism is the best approximation that we have now.

Ben: Mm-hmm.

Ozzie: However, you should be pretty uncertain before you do anything dramatic in the world. And generally, if you aren't super sophisticated, don't pretend you are. Just play it much safer. And playing it safe means just generally doing the things that society generally deems as pretty good to do. There's also a question of who you think your quote-unquote epistemic superiors are in different

Ben: Mm.

Ozzie: domains. Just stay very humble and... try to figure out people who you're pretty sure know more than you in specific areas, and then defer to them in those areas. And hopefully those are pretty reasonable people, not crazy people.

Ben: What are the heuristics that they should employ for understanding whether or not they are sophisticated enough to use utilitarianism, so to speak?

Ozzie: That's a good question. I mean, I think whether you use utilitarianism or not, it's generally a good idea to be a calibrated intellectual. And there are definitely a whole lot of intellectuals who aren't calibrated, who are dramatically overconfident. So getting a good intuition for who the overconfident people are and why they're overconfident is a pretty easy thing to recommend. For that, I do recommend the forecasting literature as a starting point, so you have a good impression of just how overconfident a lot of our leading intellectuals are. My impression is that the situation is pretty poor. I hope that 100 years from now, when people look back on this stage, they'll be incredibly disappointed by just how much of a shit show we have for leading intellectuals. I think across the board, we have a lot of people who really don't represent great epistemic principles and norms.

Ben: Right.

Ozzie: But there are few people, of course, who do a bit better. Scott Alexander in our community, I think, is more calibrated than a lot of leading figures.

Ben: This also, that's really helpful for seeing how your work at QURI and your work on tools for thought in the forecasting and estimation space all fit together with your worldview and utilitarian philosophy. Something like, and correct me by the way if I'm wrong on this, but I want to make some kind of story where it's like, okay, utilitarianism is useful provided that you are a well-calibrated, good thinker. In order to do that, you should ensure that you are calibrated and using tools and computer-aided support for calibration and thinking. Is that the story in some sense?

Ozzie: Yeah, I think there are a lot of things like that. I'll also flag that from the forecasting literature, we have an idea of what tools we could put in place to make sure that people actually make good judgments.

Ben: Mm-hmm.

Ozzie: A lot of those tools are things like, you wanna aggregate opinions between people who have a track record of quality. You also wanna put things into incentive systems where people are incentivized to be accurate, not to be enjoyable. So those are things that, hypothetically, we should really be trying as hard as possible to use for all of our key decision-making, including about, like, utilitarianism. Most people won't do that. Like, a lot of people really do prefer having their own take on things and then doing gigantic actions in the world based on their own take, as opposed to trying to listen to panels or things that are well-run. For example, in the FTX case, what he was doing was very, very far from what I think any forecasting community would have been okay with. Maybe I should flag here that when it comes to questions like, should you be honest with people? I think that there are a lot of different ways of looking at this. One is that often you have local games to play. And in those local games, you could have different decision functions than in the global game.
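
As a rough illustration of the aggregation idea above, here is a minimal sketch with invented numbers. The weighting scheme (weights from past Brier scores, pooling in log-odds space) is one simple assumed approach, not a description of any particular forecasting platform.

```python
# Minimal sketch: pool several forecasts so that forecasters with better
# track records (lower Brier scores) count for more. Numbers are made up.
import math

def log_odds(p):
    return math.log(p / (1 - p))

def aggregate(forecasts, brier_scores):
    """forecasts: probabilities for one question; brier_scores: past error, lower is better."""
    weights = [1.0 / (b + 1e-6) for b in brier_scores]    # better track record -> larger weight
    total = sum(weights)
    pooled = sum(w * log_odds(p) for w, p in zip(weights, forecasts)) / total
    return 1.0 / (1.0 + math.exp(-pooled))                # pooled log-odds back to a probability

# Three hypothetical forecasters; the best-calibrated one (Brier 0.05) pulls the pool toward 0.7.
print(round(aggregate([0.7, 0.5, 0.9], [0.05, 0.25, 0.30]), 3))
```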

Ben: Mm.

Ozzie: So there are reasons why you may be trying to maximize EV, expected value, in the big game. But there may be short games where, if your decision function is different, if you hard-code your decision function to instead be, "I'm always gonna tell the truth and I'm always gonna do a few other things," that decision function has higher expected value than the decision function that says, "I'm going to maximize expected value for everything." This is very similar to Newcomb's problem; it's pretty easy to engineer different thought experiments so that different decision functions do better, right? You could get pretty wacky with this, but yeah.

Divia: I mean, would you say this is sort of like what people mean when they talk about act versus rule utilitarianism?

Ozzie: It depends a bit on the specifics. Some people would say that those are ends, and then some people would say that they're means. I would say that, the way I'm describing it, rule utilitarianism makes very good means, but not very good ends. So it makes a lot of sense, practically speaking, that the people around me trust me. That's a super useful thing in the world for me to eventually obtain my ends. So I'm gonna do this as a pragmatic step toward doing that. And very similarly, I think contracts are amazing. So it would be kind of absurd to imagine a bunch of utilitarians who will lie, cheat, and steal for their purpose, but all have slightly different purposes, trying to coordinate with each other and having a terrible time because they just can't be honest with each other. That, I think, is a naive understanding of how utilitarians work: that utilitarians just can't do good things for each other without maximizing this bigger variable, that every decision they make is just going to be thought of as a narrow thing, so they're happy to cheat to get a buck even though in the long term it'll hurt them. Practically speaking, I think that if you put a bunch of utilitarians together and they were kind of reasonable, they would coordinate a whole lot between each other and have a whole lot of agreements and, hopefully, hard-coded things. That would be, in a way, a contract. It's like saying, I'm going to force myself in the future to obey this other decision function, such that I cannot do these things anymore. But the result of that is we get these benefits, and now collectively we could coordinate much better.

Divia: This is like local deontology for instrumental reasons.

Ozzie: Exactly. But that seems great. Like I think that we could go very far in this dimension. I think contracts are awesome. Hypothetically, like one thing that we could do that I'd be excited about, although obviously it would be controversial, would be things like if I had an AI bot in the future that tracks everything that I do, it could tell you if I'm lying. So hypothetically, I can make a financial agreement that says if I lie at any point, you know, I'll lose all my money. All my money will go to a charity I don't like. And then it could have my AI bot tracking me the whole time. This is a case of like, if I did this, in many ways it's like very stupid because this could only hurt me. At the same time, it would allow other people to trust me much better. So the future I would like to see is one where we have a lot more things like this. Of course, there are a lot of ways that that could go wrong. This is like a big power tool. So this is like dangerous stuff, but you know.

Ben: Right, making certain pre-commitments so that people can evaluate you as more trusted or more credible in various ways. This definitely, I see some ties here with your broader work on like estimation and evaluation, something like trying to create engineering systems to reduce the uncertainty about the potential actions that a person or institution could take.

Ozzie: Yeah, I think so. Our bigger agenda is, yeah, I call it advanced evaluation systems. The broad question is, can we... we wanna be able to estimate things at scale very cheaply. Like, we want some general-purpose patterns to estimate general-purpose things

Ben: Mm-hmm.

Ozzie: at scale, like, just very cheaply. And then we wanna figure out how much we could apply those patterns to solve a lot of other world problems. My impression is that a lot of global coordination and a lot of the key problems that we have kind of resolve to coordination problems. And those can be reduced to estimation problems. So if you can solve...

Ben: Say more about that, if you would.

Divia: Yeah, can you maybe give an example of one?

Ozzie: Oh, of course. Sorry, it's just a lot to go through. So for example, if we always had very precise estimates of the value of everything, I think that there's a whole lot we could do with that. For example, when politicians are trying to promote a specific bill, the bill may be 800 pages. Typically, a lot of the bill is negative expected value for the populace. If you knew the expected value of every single sentence of that bill, you could just kind of optimize it until it's pretty good. Right now we have a lot of frustration because people don't trust each other that much. And one reason why people don't trust each other is because we don't have good estimates in a lot of cases. Like, people will provide estimates that are very misleading or wrong. Yeah.

Divia: So you have a prediction that a lot of these bills have a bunch of things in them that anyone trying to do a reasonable analysis would conclude are not super socially valuable for the money. You think that those bills would be a lot less likely to be passed if people could put the 800-page bill into some thing that parsed it and said, this would be the social value of this?

Ozzie: I think basically, the more transparency there is in terms of the benefit of these things, that generally corresponds to better decisions being made. So right now there's a little bit of transparency. My impression is that the transparency we have now means that politicians are able to take some actions that are kind of better, and we're able to see which politicians did which bills, and that gives us some information, but we really don't have as much information as we could. Like, these bills are so big, it's very hard to understand them. And then people don't trust each other. So if one think tank comes along saying this bill is very valuable, other people won't even trust what that think tank says.

Divia: I see. So some of it is having whatever people are using to evaluate the value of the bill be something that was objective enough, checkable enough that people who disagreed with each other about a lot of things would both put a lot of stock in that estimate.

Ozzie: Yep. So basically, right now it takes a lot of evaluation work for people to have good understandings of how good and bad things are. And with the evaluation work that we currently put into it, they have a pretty mediocre understanding. I think that if we're able to do a much better job, then people would have a much better understanding, and then correspondingly people would make better decisions.
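
To make the bill example concrete, here is a toy Monte Carlo sketch in the spirit of Guesstimate-style estimation. The provisions and dollar ranges are entirely invented; the point is just that uncertain per-provision estimates can be rolled up into a distribution over a whole bill's value.

```python
# Toy Monte Carlo sketch: invented provisions with uncertain value estimates,
# combined into a distribution over the bill's total value.
import random

# (provision, low and high bounds of its estimated value in $M)
provisions = [
    ("infrastructure grants", 50, 400),
    ("reporting mandate", -200, -20),
    ("pilot program", -10, 80),
]

def sample_total():
    # Sample each provision's value uniformly within its bounds, for simplicity.
    return sum(random.uniform(low, high) for _, low, high in provisions)

samples = sorted(sample_total() for _ in range(10_000))
print("median total value ($M):", round(samples[5_000], 1))
print("5th to 95th percentile ($M):", round(samples[500], 1), "to", round(samples[9_500], 1))
```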

Ben: There's a way in which this comes across as a real instantiation of a mistake theory view on where people are going wrong. This is a reference to the frames of mistake theory and conflict theory, and I'm saying this also for the benefit of people who might not be familiar with them. Mistake theory being something like, okay, well, we would have these better bills if people had much better estimates, if people had more knowledge and understanding of all the implications. And then maybe the opposite lens is something like conflict theory: well, we have bad bills or something because there are competing groups, lobbying groups, competing interests, and at the end of the day it's one group winning out over another. Do you think that in a conflict theory view, these estimation systems would still help? And/or is that some kind of explanation for why they're not adopted now, or why there doesn't seem to be more push towards these kinds of trusted systems?

Ozzie: So in an extreme view of conflict theory, the US as a whole or the world as a whole is already doing a decent job maximizing its ability to coordinate. I don't see that as a viable position. To me, it seems like people are doing such a terrible job at coordination globally. There are just so many things that seem like they could be great if people chipped in enough money to fund them, if they understood them well enough. I think people don't understand how big of a deal x-risks are, but it is in their best interest to do so. Yeah, I don't feel like we're coordinating. Even among groups like the Democrats and Republicans, if they coordinated better, there are probably much better, more optimal alternatives.

Ben: Right.

Ozzie: So I think it would take a very cynical view to say that we're coordinating at optimal amounts in the world and no amount of more sophistication could help us. Basically, my view is that we have a very long road to go. Like, the ceiling for being able to coordinate better is way higher than where we are now.

Ben: Mm.

Ozzie: Coordination among 8 billion people is very tough. We're very far away from doing that with the intelligence that we have, and it's incredibly costly right now to send information to different people, and for different people to learn about each other and to trust each other. All these things are very, very difficult. So we should kind of expect that people will have a lot of trouble coordinating on a big scale. But if we could, then we'd grow a lot. Another example of this is in government: a lot of the challenge of government is the complexity of legal terms. Bills are very big. It's very complicated to deal with all of that. If you could reduce a lot of that complication, my guess is that... I think this is more a mistake theory

Ben: Mm-hmm.

Ozzie: that we just wanna do better in these areas, than a conflict theory of, oh, it's actually a good thing, kind of, or it's Pareto optimal, for bills to be this monolithic and annoying to deal with. I think we could just do quite a bit better. And there's a qu...

Ben: Civilizational inadequacy when it comes to cooperation and coordination.

Ozzie: Yeah, like, if you imagine replacing the 8 billion people that we have with people who are a thousand IQ, or AGIs that were incredibly intelligent, do you really think that the world would be so messy? Like, are you saying, yes, they're optimizing, but this is actually the optimal world? I think that that's an intense position.

Divia: Yeah, so it's sort of like part of how you think about mistake theory versus conflict theory is you're running some sanity check that's like, does it seem like there's a lot of value being left on the table? And yours comes up like, yes, obviously, it's not subtle. And so while you're not necessarily saying that there isn't any conflict theory component, you're like, but the mistake theory component is massive.

Ozzie: Yep. Yeah.

Ben: Yeah, makes sense.

Ozzie: I think a lot of people would agree with that too. Like conflict theory is definitely a thing. There are many cases where there are zero-sum games, but there are also many cases where there are potential positive-sum games.

Ben: I want to jump around a little bit, and I expect we'll be coming back to this topic as well. But since we're kind of talking about governance and some of the ways that a mistake theory view informs that, and the potential for improvements there: I know that you've recently joined up with another effective altruist, Julia Wise, in thinking through potential improvements or reforms to the EA landscape. This seems like a really interesting, almost local example of the theory you're talking about here. Like, okay, well, maybe are there better ways to coordinate and cooperate just among a local community of people? I'm curious, are there immediate things that you're excited about? Or how are you kind of thinking about this as a coordination problem?

Ozzie: That's a good question. It's complicated, as is true for a lot of things.

Ben: Totally.

Ozzie: So on one hand, what my organization, the Quantified Uncertainty Research Institute, is doing is we're trying to provide specific tools to help coordination within this community, right? So right now we're working on a relative values application. The idea is that it would be nice to have kind of specified utility functions for how valuable everything is. And my guess is that as we get this information, there are ways that that could be pretty useful for making everyone more aligned with each other and pushing things in a good direction. That said, this is kind of an ambitious, more futuristic goal. I think practically speaking, the obvious stuff I'd like to see in effective altruism probably looks more like just following the best practices that already exist in other areas. So we probably need to just do a very good job before we start doing a super cutting-edge job,

Ben: Mmm.

Ozzie: so to speak. And what a good job looks like... at least the way that I look at it is that a lot of EA's problems are bureaucracy problems. In fairness, I think a lot of global problems are bureaucracy problems. The best bureaucracies that we have are probably sizable companies. So there's one question, which is, where are we doing a good and bad job compared to what large companies would be aiming for? It is complicated, but one way to look at it is that in EA right now, we have a collection of tiny little nonprofits, and we're trying to coordinate this mesh of nonprofits with a few funders. Right now, it's pretty difficult to do that without any super great infrastructure around it. So there are questions about how do we take the best practices that bigger organizations have and make sure that we have something that's hypothetically very flexible, and also, in the future, decently well-trained, with infrastructure that is quite a bit more powerful. A lot of this comes down to maturation. A lot of startups, when they're young and very ideological, typically take other people who are ideological with them. And then you have a question, which is that at some point they have to mature. At some point, they go from 10 people or 30 people to 100 or 1,000. And in that case, there needs to be some professional management brought in. Now, in many cases that destroys the organization and they kind of lose what made them, or the new management comes in and just does a terrible job. But in some cases, they're able to make it work. And in those cases, that typically is what leads to the very good companies, right?

Ben: And so you see EA as being in this transition period, something like it needs to move from startup mode to big company mode.

Ozzie: Hopefully it doesn't have the connotations that people are used to when you say a big company.

Ben: Gotcha, yeah.

Ozzie: But I think generally, like, we need to, you know...

Ben: A larger, more mature organization.

Ozzie: We need to become more mature. I think there's a lot of that. There is a different lens on it, which is looking at EA as a social movement and then comparing it to other previous social movements. In that case, I'll flag that social movements often don't last incredibly long, and often they are very frustrating to do. It's just not super fun sometimes to coordinate among a group of 100 people or 1,000 people in kind of messy structures, because people are always going to disagree with each other. And then you have governance questions to care about, making sure that people stay on the same page and agree to coordinate, as opposed to just losing hope very quickly, or having disagreements about what to them seems gigantic but often, in the scheme of things, winds up being kind of small. So being able to keep this group kind of focused and motivated, that's often a lot of work. I think it's very easy to give up midway. And unfortunately, I think after the FTX debacle, things have felt a lot less fun for a lot of people. So right now we're kind of seeing... I think some people are just kind of throwing their hands up, and they just don't wanna have to deal with it. And we'll see what stays and how much enthusiasm there is to really double down. I think that there's a lot to work with still. It kind of depends on what community members feel like. Also, another challenge now is that AI safety is becoming a much more intense topic, so more people are moving there. So there are probably some trade-offs between how many senior people should be trying to figure out what an EA community is, per se, versus just focusing specifically on AI safety organizations. And then within AI safety, what kind of coordination can we have? Is this going to be a whole bunch of dispersed organizations that can't do much with each other? Or is it going to be more of an agentic cluster that could take unilateral or large-scale actions as part of one decision-making process?

Ben: It certainly sounds like you have a preference or an intuition for the latter. But maybe more specifically, something like: in the classic trade-off between decentralized and centralized structures, you think it would be good, at least on the margin, if things moved more centralized. Does that seem right?

Ozzie: So this is a bit of a nuanced topic, but generally, I talk about a term, justified trust. The more justified trust you're able to have in agencies that are powerful, the better. Generally, I think the step that we want is, we want agencies that we have justified trust in, and then we want to give those agencies quite a bit of flexibility and power. That said, if we don't have justified trust in them, if you have unjustified trust, then that's a horrible idea. This is another power tool that is amazing when you're able to have it work without totally messing up. But if you're worried about it messing up, then you can't play this card.

Ben: So create really powerful estimation evaluation systems so that we can have organizations that have the level of justified trust necessary to be very powerful and agentic.

Ozzie: That's definitely one, yeah. Similarly, I think with the US government as an example,

Ben: Mm-hmm.

Ozzie: I think that if we basically give up, like, libertarians kind of give up on the idea of government and say we can't have justified trust in government, it's always gonna be a mistake. And if you take that route, then yeah, the government can't do anything, because it doesn't exist. On the other hand, if you have a gigantic government that you can't trust, then yeah, you're playing with fire. But if you're able to have a government that you can trust... this may require kind of extreme measures. This is kind of an empirical question of what we need to make this work. And my guess is that we should be much more intense about forcing our government to be trustworthy, right? That would probably require more procedures than we have now. Right now, I think most parties kind of have unjustified trust in that party's leadership, in a really harmful way. But hypothetically, if they were just a lot less trusting of their own leadership unless it was really proven, then they could really force their leadership to be better. And then you're able to have leadership that you have justified trust in.

Divia: Could you say a little more about how that would look? Like,

Ozzie: Yeah. Yeah.

Divia: let's say, you know, I imagine all of the major officials, either Democrats or Republicans, like having a crisis of faith, being like, okay, we have too much trust in our leaders. We're going to only try to trust them as much as they deserve. And you're like, and then they could force them. So how does that part work?

Ozzie: So this, yeah, there's a lot you could do. One big challenge is that a lot of what you would do would look bad for you in the short term. So imagine if there were prediction markets, like scandal markets, on each politician messing up at every point in time. Like, we'd kind of have a prediction for how likely every politician is to have an affair or to have other scandals. I think things like that could be pretty indicative of what we could expect. If we were being a bit more intense about it, political groups could have different types of surveillance on the officials to really triple-check and make sure that they're not doing anything dubious. And I think a lot of scandals really should just not be possible. Like, in Bill Clinton's example, if hypothetically he made some deal that said, if I ever do this type of scandal, all my money will be lost and donated to groups I don't like, or groups that will try to sue me indefinitely, there are ways that we could have done this to really force his hand and make sure that he's not going to do anything like that. Hypothetically, the most surveilled people in the world should really be the leaders. They should go under quite a bit of extra scrutiny. You could also imagine third parties that we trust in order to really just monitor the people very much on the top. Yeah.

Divia: Okay, so it's something like, if people insisted on monitoring and commitment devices, then that's one way you see that leadership could have more justified trust.

Ben: Do you think current leadership within a social community, and obviously EA is the one that comes to mind here, should do this? Like, would you suggest that the CEA board have pre-commitments or surveillance of themselves, among top leadership?

Ozzie: So one of the big challenges with surveillance, as I talk about it, is that some of it is pretty novel, and it would take a while, it would take work, to get correct, right? So, you know, it's a novel thing that's gonna take experimentation and effort. That said, I imagine over time there are things like that that seem pretty good to me. There are probably things that we could have done in the case of FTX for that leadership to prove that it wasn't doing things shady. That doesn't seem like it should have been an incredibly hard problem if we were really on their case.

Divia: So, as a different question for you: it seems like you personally, for sure, have a lot of appetite for these sorts of surveillance solutions and pre-commitments and things like that. And also you're sort of pointing at a practical or technological gap. Like, it hasn't been used very much, it hasn't been ironed out. But some of these things people could do. Like, I'm guessing you don't look at the, I don't know, current politicians and think, well, you know, there's a technological gap, but they're sort of doing the best with the tools they have. And so I guess I do wonder about that part. Like, you could

Ozzie: Yeah.

Divia: make it, certainly make it cheaper, to do these things by having more tools, more affordances, and maybe over time that would make it more normal. But is there something where... Because when I try to imagine it, I'm like, okay, but then there's the part where people would have to want to do it. And that's the part I have trouble imagining.

Ozzie: It's definitely gonna take steps for us to get from here to there. I think right now we don't have as candid a culture as I would like us to have. I think that there's definitely a very big spectrum between how candid different organizations are, and most people are in one camp. When it comes to powerful people in these types of positions, typically they have groups of people around them where it's very convenient for those groups to not question the leadership that much, because they're trying to help the leadership. So in the case of the Democratic Party or the Republican Party, both sides, all of the people in the top power structure are often kind of incentivized to help out other people in the power structure. So unless they have, like... unless people really try to distrust them for some reason, they don't have much of an incentive to actually set up institutions like this. But I think that if a lot of the voter base understood that they should trust politicians less, like, there are ways that, if people generally required more verification before trusting each other, I think some of that would happen. But the way that things are going right now is that any political side only needs one half of the electorate to be voted in, and it's pretty easy to get at least one half to...

Divia: Does it come back to voting reform? Is voting reform on your list?

Ozzie: But arguably, we could do similar things in smaller orgs. But it is, it's a change, right? This is a set of things that would be cool, but it only works for people who appreciate how little they should trust each other, and how important it is to make sure that the people in power have extra attention on them. And typically the people in power try to not make that happen, so.

Ben: Yeah, I guess something interesting that comes up for me on this is something around boundaries. Like, I imagine if I were a politician at this point and somebody's like, all right, you should surveil yourself 24/7 to prove that you are not taking bribes. Outside of the weirdness thing, because obviously we have a high tolerance quotient for weirdness, there's some way in which I'm like, oh, but I don't know, this is some kind of invasion of my privacy, of the boundaries I consider my space. And one thing that I'm noting as a theme, in this specific proposal, which obviously needs a lot more nuance, but in the other things as well, like relying upon outside reasoning systems, outside evaluation systems, is that you're making the case that, no, actually a lot of our concepts of boundaries are wrong, or at least not efficiency-maximizing, or they don't work well in the world of today. Is that fair, by the way? I wish I hadn't used the word boundaries, because I'm a little bit like, I don't

Ozzie: No.

Ben: know

Ozzie: True.

Ben: that seems loaded in a positive sense, where I'm like, I don't know, it's some way of, like, what's the right kind of apportioning of decision-making and concepts like privacy.

Ozzie: So, I mean, this kind of goes hand in hand with candidness and transparency, right? Like, typically leadership wants other people to know as little about themselves and what they're doing as possible, so that way they can reveal only the information that's advantageous to them, right? Like, that's a preferable world to generally

Ben: right.

Ozzie: be in. And the more information that you have to reveal, the worse, outside of that.

Ben: And you want to strike a new social contract. Not to make it too big or something, but it is something where it's like, no, for these things or something, we should have a different apportioning of where the public interest in your private space lies.

Ozzie: So effectively, I think that people in really important positions should effectively have less privacy.

Divia: They probably already do to a certain extent.

Ozzie: But hypothetically, there are different kinds of contracts and agreements that they could do to help make sure that they could be trusted by more people. And I think that those things should be on the table. That doesn't mean that this information has to be public. For example, you could just have third-party agencies that kind of monitor things and make sure that things are okay, or in the future, maybe AIs, with no people involved, that just kind of keep an eye on things. There's a whole lot of tools in your toolkit. Again, one of them is just to have prediction markets on the quality of the decisions by these people, or the likelihood that politicians will get into scandals. Robin Hanson had the idea of prediction markets on, if the CEO of each company is fired, will the stock price go up or down, and by how much? That is the kind of thing that CEOs aren't going to like in a lot of cases, because, like, why would they? They could always... they're pretty good at spinning

Divia: Some of them might like it, though, which is maybe why you think that this could eventually be a stable equilibrium. Like, the ones who want the justified trust, who are confident that they are trustworthy and want other people to know, they would presumably want it.

Ozzie: There's a small fraction. Like, a whole lot of the ones who should have justified trust are already pretty good at convincing people. That's how they got into the role. So it's really the ones that aren't super good at convincing people, but would be trusted if they had this other information, that would want it. But that's kind of a narrow crowd. And typically, whenever you change an evaluation procedure, the people who are losing out on it complain a whole lot, and the people who like it don't give it that much support in comparison. So it typically is difficult to change evaluation procedures. But this is part of a long road of us continuously getting better and better evaluation procedures in order to become more and more coordinated, right? We already have some things like this. There are a lot of financial records that have to be public. People in positions of power do have boards that go over them, that check them in different ways. I'm kind of saying that we continue down this road and do more and more things like this, so that we could get into a much better position.
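
For readers unfamiliar with the Hanson proposal mentioned above, here is a minimal sketch with made-up numbers: two conditional markets forecast the stock price if the CEO is kept versus fired, and the gap between them is the market's implied estimate of the CEO's effect.

```python
# Toy illustration of "fire the CEO" conditional prediction markets (assumed prices).
price_if_kept = 102.0    # market forecast of the share price if the board keeps the CEO
price_if_fired = 109.0   # market forecast of the share price if the board fires the CEO

implied_effect = price_if_fired - price_if_kept
print(f"Implied effect of firing: {implied_effect:+.2f} per share")
if implied_effect > 0:
    print("The conditional markets suggest replacing the CEO would raise the price.")
```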

Divia: So you're saying that you do expect there to be resistance, for a bunch of structural reasons. But if you look at the past, you're like, well, in fact, more and more of the sort of things that you've been talking about have been put in place over the years. So you don't think it's unrealistic that there could be even more in the future.

Ozzie: Oh no, yeah, I think this is a very long road of intellectual infrastructure that we need to make. And we're on part of it, but we have a very far way to go. And I think the types of things that I'm talking about are things that would help us go along.

Ben: What's the lowest hanging fruit in your mind in this intellectual infrastructure journey that you'd like to see picked?

Ozzie: So my work, of course, the main work, is in estimation systems at scale, right? So, like, how do we apply things like the forecasting infrastructure that we have today, make much more interesting versions of it over the next five to 30 years, and use that to solve a lot of problems? One cool thing that you could do with that is that you could use it to figure out what to do with it, right? So if you're very good at estimating, you could have a list of all these different things that we could estimate, and then, for each one, how valuable would it be to estimate? So for example, the whole government is gigantic, it's a monolith, it's huge. So trying to influence the government in the abstract is probably a bad idea. But if you're very sophisticated, you could figure out very specific parts of the government that have a very good risk-to-reward ratio, and you could say, this specific department is much more likely to be influenced by third-party information, and actually it would be very valuable for it to be influenced. So we're gonna go after that section. Obviously, my guess is that from an expected value, like an effective altruism, lens, improving the decisions of effective altruists will be the best bang for your buck in the short term. And then even within that, there are questions about how best to do that. What I'd like to see, though I've realized that this is a bit too candid for some people... I think there's some, like, truth shock, where when people get certain amounts of information or truth, they're really just uncomfortable with it if they're not used to it. So one example of that is, can we just have a list of how valuable everything is in effective altruism, on some utility function, or hopefully a set of them that represents different values? If we did that, some people would lose out on it, because, you know, right now some people think that they have more status than such a list would say that they have. So there will be pushback. And some people really would find it stressful for different reasons; they're not used to it. But hypothetically, the more we're able to go in this direction, that could then tell us the next steps. So if you have a pretty decent utility function, where you've used forecasters to determine what projects were useful in the past, then hypothetically you could get a sense of, oh, maybe the estimation types of projects were very useful, or maybe the governance projects were kind of useful. So then you could start using this systematized thinking procedure to take the corresponding best actions. And that includes, like, estimation and governance.


Ozzie: But again, this is very much the ambitious way of doing it. There are less ambitious takes that are more like, let's just take the most reasonable people and get them to take pretty good, like, yeah, solidly good options.

Ben: How do you anticipate handling deep value disagreements when it comes to valuation? For instance, some kind of split between Group A and Group B in terms of how they value a project because of underlying disagreements about even the concept of the value the project is producing.

Ozzie: So there's a lot of cool stuff to do in those cases, as long as you have people who share similar, I guess, epistemic values. If people are willing to lie and cheat in order to promote their values, then they're harder to coordinate with, but if you have people who are reasonable and kind of honest, then there are a lot of ways you could do trade in those cases. I've been working on a relative values way of doing estimation, where you basically have people estimate, for any item in this huge list, how valuable is every item versus every other item. We haven't really done it at scale yet, but my impression is that a lot of people probably have a lot of agreement on the sub-clusters. So for example, people may disagree on how valuable AI safety is compared to biorisk work, but there may be a lot of agreement that, within bio, these projects kind of follow this ranking, right? So I think that people with very different beliefs could still have agreements on a lot of these sub-areas, and then work together to try to do their best at making sure that their local decision-making is optimal, like you've optimized your local maximum.
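
As a rough sketch of the relative values idea, and not QURI's actual implementation: the item names, ratios, and fitting method below are assumptions for illustration. Pairwise judgments of the form "A is about k times as valuable as B" can be fit into one consistent set of scores with a least-squares solve in log space.

```python
# Minimal sketch: turn pairwise relative-value judgments into consistent scores.
import math
import numpy as np

# (item_a, item_b, estimated value ratio a/b) -- hypothetical judgments
judgments = [
    ("project_A", "project_B", 3.0),
    ("project_B", "project_C", 2.0),
    ("project_A", "project_C", 5.0),   # slightly inconsistent with 3 * 2 = 6
]

items = sorted({x for a, b, _ in judgments for x in (a, b)})
index = {name: i for i, name in enumerate(items)}

# Each judgment says log(v_a) - log(v_b) ~= log(ratio); solve for the log-values.
A = np.zeros((len(judgments), len(items)))
y = np.zeros(len(judgments))
for row, (a, b, ratio) in enumerate(judgments):
    A[row, index[a]] = 1.0
    A[row, index[b]] = -1.0
    y[row] = math.log(ratio)

log_values, *_ = np.linalg.lstsq(A, y, rcond=None)
values = np.exp(log_values - log_values.min())   # normalize so the smallest item = 1
for name in items:
    print(f"{name}: {values[index[name]]:.2f}")
```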

Ben: Mm.

Ozzie: And then when it comes to bigger, global things, on one hand, there are cases where someone believes something that's kind of strange and they just go off and maximize it because it's in their best interest. And hopefully you could help them; they could have their own utility function estimates that help inform them in how to do that. There are other cases where what two parties will do will fight each other, right? Like,

Ben: Mm.

Ozzie: Democratic and Republican donors may just be directly canceling each other out. In those cases, if you're able to coordinate,

Ben: Mm.

Ozzie: then you get around that, right? Then you say, look, there are these issues where our interests cancel, so let's just not focus on those issues, mutually, and find other types of things to do that aren't going to cancel out in this way. This is like a trading game. It is kind of embarrassing, of course, that we can't do this in American society. That is one thing that pretty obviously seems like an improvement, a set of improvement potential. I'm not talking so much about the money spent on politics, but

Ben: Right.

Ozzie: more the attention and effort spent on politics. A lot of it is just kind of canceling out.

Ben: So build up really good estimates from the ground up for these different groups, and then foster an environment where people can hold, either through contracts, pre-commitments, whatever, can hold to the agreements, such that the mutually wasteful Red Queen race style dynamics are alleviated. Is that an estimation utopia?

Ozzie: Yeah, so estimation utopia is kind of the word that I have for this world where everything

Ben: Mm.

Ozzie: is parameterized... like, a lot of key items are parameterized and estimated, and we have justified trust in those estimates, they're good, and people know that they're good. And in that world, there's a lot of stuff that we could do very nicely. Another example is warfare. Warfare is, for one thing, an example of when private interests trump public interests, right? Because some interests do very well when war happens, and those interests are incentivized to try to make war happen. Unfortunately, it seems like the ones that don't want war to happen don't seem to coordinate as well. Like, we don't have gigantic public charities just trying to fight the war industrial complex. But hypothetically we could. Hypothetically, there's enough expected value on the table that if people could coordinate with each other to fund groups to fight, yeah, to go against that, they could help ensure peace at a global scale, because in general, it's in the public's interest for there to be less warfare. Yeah.

Ben: I just have to make an aside, because it keeps coming up when I think about this, which is, I've been reading a little bit about the movement to formalize math, and the way in which, at some point, people realized within this community that, all right, math was not actually on a foundational, fundamentally intellectually stable basis. And there was a whole long movement to formalize all of the axioms that had previously just kind of been trusted without being investigated. And I keep thinking, oh, maybe this is kind of what Ozzie's trying to do for social movements or society. Something about formalizing everything from the bottom up, not in terms of logical mathematical precepts, but something more like, well, formalizing it on the basis of the kind of Bayesian thinking representations, yeah.

Divia: Yeah, like the state-of-the-art epistemic tools that we have.

Ozzie: I think from my perspective, basically, if you take a lot of the empirical or enlightenment Western canon and keep on scaling it with technology, there are a lot of ways to use that kind of thinking in order to, yeah, make decisions in the best ways at scale, right? Like, we kind of know a lot of good principles, and we want to apply them to all the fuzzy stuff. Right now, what I see is a lot of decisions in the world being made by gut judgments, by individuals and pet beliefs; that feels very far from anything that has been optimized. I'd also flag that there's probably a lot of shared infrastructure that we can make to try to do this type of thinking at scale. Basically, we want to take these high-level value judgments and turn that into a kind of labor that we could heavily optimize. And that's going to mean that it's going to be very clear and specific, and then, yeah, hopefully industrial grade. But that's definitely, you know, just continuing along that road. Some people hate this whole kind of paradigm. There are definitely people who hate, you know, the enlightenment way of thinking, who hate empiricism. And they're gonna disagree with a lot of this, right? But yeah.

Divia: Maybe this is, I don't know if this is the sort of question that makes sense to ask, but do you know why you like enlightenment thinking and empiricism? Like if you imagine, you know, talking to somebody who's more skeptical, and you've gone all the way down to, no, but I just want to scale the enlightenment thinking and the empiricism, and they're like, but why? What would you say?

Ozzie: So, yeah, I mean, this is a big question. At the end of the day, it does kind of bottom out in some intuitions about reasonable ways of doing things. And I think at the end of the day, we'd have different intuitions, and it's kind of hard to explain those intuitions. I could go into a bunch of examples or arguments, or if there are specific things that we disagree on, I could explain why I think that this type of paradigm makes sense in those cases. A lot of the alternative cases would be something like postmodernism, where, for example, there may be styles where you're not supposed to be super clear about what you think, or you're supposed to use more metaphorical thinking.

Divia: Okay, so I mean, I guess that's true. Certainly postmodernism is very popular. To me, the intellectual competitor that I respect the most, that seems potentially opposed, is Hayekian thinking, like about central planning. I think you're like, okay, central planning is fine if we could just improve our ability at calculation and solve that. That's sort of what I hear you saying, and maybe that's not how you would put it. But I certainly sometimes have the intuition that... I don't know if it's really possible. Like, people can get better at calculating, but I don't know if that means they can solve the calculation problem, because it often does seem like there are a lot of corrupting forces that sort of manage to get in there more, the more legible things are. Like, there's a lot of benefit to having things be more legible, but then if things are both legible and centralized, they could get corrupted maybe faster. So I guess that's one place that I may be coming from on this.

Ozzie: Yeah, so there's a lot here. I'd say, yeah, some of my thinking is very technocratic. It's like saying, how can we do the technocratic dream but actually do it well? Because the technocratic dream is, like, very easy to corrupt. I'd agree with that. And, you know, it's very much like a power tool. So if the wrong people use it, they'll get it very wrong. So it's a question of, like, there are kind of different approaches. One is to say, let's give up on the technocratic dream. Let's stop using all of these cool tools and go back to other types of ways of making decisions. I don't know exactly how great those ways are. Like my impression is that a lot of those ways are just, either not having centralized-ish groups... I don't know what they are. Like, it seems like gut judgments and like poetry and

Divia: My counterpoint is, like, I certainly don't have any quibbles with the, like... I think trying to quantify things and think about things systematically, like, I'm not inclined to disagree with you there. I think it's the centralization piece that's the part that gives me more pause. So is it possible to sort of separate out those intuitions?

Ozzie: I would say a lot of my thinking isn't super centralized. Like most of the stuff

Ozzie: like most of the estimation at scale is something where you don't really need much centralization. Like you could use large-scale prediction markets and stuff like that to be kind of accurate. And when you do have a so-called centralized figure, the centralized figure would be something like a hedge fund that has a specific contract to be accurate, and its hands would be tied from doing just about anything other than providing accurate estimates.

Ozzie: I think that there are specific approaches that I haven't outlined yet that would help create that type of thing.

Divia: Mm-hmm.

Ozzie: Yeah, I do hope that people, like I said, that people are very skeptical of authority figures, so I think we would agree with that. There are questions about what we could do to kind of ensure that we hold authorities to different levels of power. And then there's also the question of, do we even have options in the world as it is? There are some players that just have a lot of power, and we may not be able to reduce that a whole lot. So maybe there are things that we could do to at least try to oversee them better. So one example of this is the boards of the new AI safety, or the AI, companies. They may be in positions of huge amounts of authority. Yeah, it could be great to try to decentralize that, but I don't know if that's an option. So given that these things are kind of already there, how do we make sure that they go well? Yeah.

Divia: Okay, I mean, that's true that I haven't heard any... I mean, I guess there's some people who say, well, make it open, like, try to encourage open source development. That's one idea I've heard for decentralizing it. There are a lot of objections to that which I think I'm very sympathetic to. But that said, I think there are also a lot of proposals for further centralizing things, which I think are very much inside the Overton window. So I don't know, how do you feel about that sort of thing? I don't know.

Ozzie: So it's complicated. On AI specifically, I think that some amounts of centralization, like I think in a lot of cases when things are very important, some amounts of centralization are very useful. However, there is a big difference between centralization that allows individuals to do things that are their pet beliefs and in their self-interest, and centralization where the people in charge have their hands tied and are not able to do anything outside of what is in the public's benefit. And I think there are ways that we could get more to the latter. And those are generally what we should be aiming for; I think things like that look kind of powerful.

Ben: Yeah, I want to go back to the concept you had of, like, justified trust, because I think one way that I heard of advocating for, or being more okay with, centralization relative to a Hayekian paradigm that you've espoused is something like: instead of having, for instance, the market or market forces and the kind of competition there as a check, you would be more okay with large groups provided there is some system that is developed that acts as a check on those groups. Like, is that... I'm just, I've been hearing a crux or something around the, like, well, if it was a powerful, large, capable group, it is better provided there is something that can act as an investigator, auditor, shackles that bind the giant, or something.

Ozzie: So a lot of my interest is a bit orthogonal to the question

Ben: Mm.

Ozzie: of how big power figures do you want. A lot of it's more just like, how do we do estimation at scale? Which is kind of provocative in its own right, because

Ben: Mm-hmm.

Ozzie: that is very transparent and very honest in a way that people aren't used to. When it comes to power, for one thing, if the prediction markets of the future said you should do things in a decentralized way, I'd be happy with that. If they say that you should do things in a centralized way, I would be happy with that, as long as we could have some justified trust in the prediction markets, which I think, with the right work, we should be able to. My guess is that if you do things using predictions, practically speaking, that would do a lot of the work that a centralized power would otherwise do. Like having a good list of just how valuable everything is. One of the main things that centralized figures typically do is they kind of make up the utility function. It's kind of on them to judge how valuable things are. So you kind of want to take that away from them and put it into a paradigm that we have a lot more faith in, right? And then there are just a lot of ways to take away authority from groups that we may give control to, right? So one way is kind of tying their hands, saying that they kind of need to be... sorry, it's complicated. So on one hand, you have prediction markets. So, sorry, futarchy is Robin Hanson's idea of using prediction markets to formally make a few, like, big high-level decisions.

Divia: conditional prediction markets in particular.

Ozzie: Yeah. But there's a big spectrum there of how we want to use similar tools in order to guide the government. So right now in the government, for example, cost-benefit analysis is kind of baked into different legislation. The US has to at least kind of pretend to justify cost-benefit analyses in order to get bills approved. If you do a whole... they don't like that. That's kind of less agency that the government has, even though they have a lot of ability to do things. With that, right, so my guess is that in a world with a lot of estimations, it would be very obvious how to judge executives. Like the board would be able to see, did this executive improve performance? Like exactly how valuable was this executive's work? And the executives themselves would basically be able to see, for everything that I do, this is how valuable it is. And if I do something that's not optimal, then people are gonna get upset with me. Like they'll be able to flag very quickly that what you did was not optimal. If you get very advanced, of course, what you could do is start tying incentives more directly to this. So it's not just a question of, as an exec, are you hired or fired? It's much more that we have some utility function of how good of a job you did, and the money that you make is very directly correlated with that function. Of course, you can get a little bit better than that. But you basically have a lot of ways to force the people in power, like force their hand, in order to do what is in the interest of the collective. Although hopefully you have a collective that's a big collective, as opposed to a narrow one that's able to more effectively do bad things.
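[Editor's note: a minimal, illustrative sketch of the futarchy-flavored rule mentioned above: pick the option whose conditional estimate of an agreed metric looks best. This is not from the episode and not Hanson's actual mechanism; the options, the welfare metric, and every number are invented assumptions.]

```python
# Futarchy-style decision sketch: "vote on values, bet on beliefs."
# The agreed value metric and the conditional estimates are made up here;
# in a real system they might come from conditional prediction markets.
import random

conditional_estimates = {
    # Monte Carlo samples of the welfare metric, conditional on each option.
    "pass_bill":   [random.gauss(2.1, 0.6) for _ in range(10_000)],
    "reject_bill": [random.gauss(1.8, 0.4) for _ in range(10_000)],
}

def expected(samples):
    return sum(samples) / len(samples)

# Decision rule: take the option with the higher conditional expectation.
decision = max(conditional_estimates, key=lambda k: expected(conditional_estimates[k]))
print({k: round(expected(v), 2) for k, v in conditional_estimates.items()})
print("chosen option:", decision)
```

[The same pattern extends to the executive example above: estimate the agreed value metric conditional on each decision an executive made, and tie some part of compensation to how those estimates resolve.]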

Ozzie: So the specific point that I have is that if there would have been an agency with $1,000 to $400,000 per year that did nothing but try to find red flags about FTX and then present those flags to the public, just like a monitoring agency for this group. Like, the public was basically putting many billions of dollars of investment in, so it would make sense that the public would also be incentivized, or be interested, to do some coordinated checking and monitoring and evaluation. I think that, like, yeah.

Ben: That makes sense to me, though I have this skepticism when I think about the track record of those kinds of organizations in the past. Like I think of the example of the formerly most notorious fraudster, maybe he's been supplanted at this point by SBF, but Bernie Madoff, and the stories of how the people that were supposed to be investigating and auditing him ended up getting hoodwinked multiple times in somewhat embarrassing ways. Like I remember one story about him entertaining them in a specific office while people were upstairs trying to create the fraud in some kind of ridiculously comic way. Another story I'll try to link in the show notes. So I hear you, and I agree it seems like there should be ways to do that, to both investigate intensely and find it. But it doesn't feel like we have great track records for those orgs now. What do you think is going to change, or how can we do it better? I suppose better technology, for one.

Ozzie: I think it's hard to use anecdotal-ish evidence like that in order

Ben: Fair.

Ozzie: to make a bigger claim, because in many cases they are caught, and that's one reason why, arguably, maybe we would have way more Bernie Madoffs if there wasn't that type of digging. So maybe in Bernie Madoff's case, obviously he was able to get around it, but in other cases they weren't. In SBF's case, they definitely did a bunch of very sketchy things, from what I understand. So early Alameda, it's public knowledge that some employees were really not happy with what happened. Later on, there's the fact that they didn't really keep books. The fact that they didn't really seem to have a board. The fact that they did things with investors that a lot of investors found very red-flaggy. Like they would apparently send them deal sheets one day before, which wouldn't allow them to do any due diligence before having to make a decision. So

Divia: Yeah, and also, like, with crypto, since we're talking a lot about technological improvements, and I don't actually know the details with these tokens, but one of the promises with the crypto stuff is that crypto forensics should be able to sniff some of this out. And there are often reasons why people are like, okay, well, we can't find these coins and whatever, and they're like, well, this is... like, there's sort of some game of various explanations for why the crypto forensics don't always turn up what they think. But I guess I am going to say I think that seems like one way an agency could have looked into it that isn't always possible, on top of all the other red flags.

Ozzie: In terms of at least, like, failing with dignity, quote unquote: do we think that the public did a good job in trying to investigate this and was misled? Or do we think that the public just did a terrible job in, like, doing anything coordinated and just got hoodwinked super easily? I think we fall a lot more into the latter camp. Like, it doesn't seem like he had to try that much to deceive the public. It seemed amazingly pathetic how much he was able to get away with so easily. Like, he didn't have to disguise these things. So I think that, like... Yeah, it's very possible that if there would have been much better measures in place, they would have been able to better deal with him. However, I think that can change. I think that's generally the direction that we want to go. We want to make it more difficult for someone to get away with things like that. And it's pretty clear that we weren't trying that hard collectively. Both within effective altruism, I don't think that there was that much coordinated effort to research him and to do a lot of systematic digging before we kind of went to bed with him and, like, put, you know, what the EA community had on the line with this relationship. But then more generally, you'd think that within crypto, with $8 billion at stake, hypothetically that community could have self-funded some organization to do some digging. Like obviously it would be a lot of coordination work to do that self-funding in the existing world that we have. But I think in a more sophisticated, pleasant world, basically bodies would be making good trades where money would go to groups that would do a good job representing lots of people. Right. And in this case, that would mean doing more due diligence and evaluation and monitoring of FTX. In other cases, it would do that on a bigger scale. In crypto in general, it seems like this happens all the time. So it is kind of embarrassing that we don't have good agencies to oversee things.

Ben: Do you have forward-looking predictions? Are there groups right now where you're like, ugh, there's like a fair bit of systemic risk maybe with this group. I would love to see more evaluations, either in your local community or broader.

Ozzie: I mean, I think everywhere, just like, period. We just generally don't have... people have a lot of faith in people that they're kind of close to, or kind of in the same movement as, or something, often quite a bit too much. And there are a lot of places where that's scary. Generally, I expect Apple to do what's in the best interest of Apple. And I think with public companies, we kind of have a good understanding of what the risks are, how bad it gets, and what their incentives are. In the case of governments or charitable endeavors, it can become scarier, especially when they start doing really scary things. One really big question mark is the board members who are in charge of the AI organizations. So OpenAI's board is kind of responsible for this incredibly important thing. And it's possible that once the windfall clause, or they don't call it that, but once the 100x thing hits, if OpenAI's vision is achieved and they're able to make AGI and kind of dominate the future funding of the whole world, the people kind of in charge of that formally are gonna be the OpenAI board members.

Ben: Mm-hmm.

Ozzie: And people,

Divia: Also, sorry, can

Ozzie: yeah.

Divia: we just, in case all our listeners don't know, I believe the way this works is it's, like, a special type of nonprofit hybrid where the investors can get up to a hundred times their investment back, but no more than that. Is that... did I get that right? Okay.

Ozzie: Something like that. I think different investors have different amounts that they get back.

Divia: I see.

Ozzie: Some of the more recent ones, maybe not quite a hundred. But like after a hundred x, I think all the money kind of goes to OpenAI.

Divia: Mm-hmm.

Ozzie: And the people in charge of OpenAI are the OpenAI board.

Divia: Right.

Ozzie: So this is a very important group of people. Like basically whatever this group of people wants to do with the money, I think they basically can. Like it is, you know, it is a charity, which means that it's bound by the charity things. But I think that only matters insofar as they could get charity donations. They could kind of do whatever they want outside of that. And if they've made money this way, it's kind of up to them what to do with it. So it's basically up to the board, and then ultimately the CEO, to kind of decide what happens with the future of humanity's money, which is kind of one world that this could go in. And that's an incredibly crazy position for a group that's, like, unelected. Most people have no clue who these people are. Yeah, there's a question of, like,

Ben: I'm looking right now at the board and I guess this will be another thing I link. But like, for instance, I did not know Adam D'Angelo, who I believe is the founder of Quora, is on the OpenAI board.

Ozzie: And there's a question of like, if there were one company that was responsible for one fifth of global GDP or something just ridiculous, what things can we have in place that would make us potentially trust the group at all?

Ben: Right.

Ozzie: Like, is there anything that we could do to try to make sure that the people in charge don't just do whatever their own self-interest says? Like, this is a ridiculous situation. So I think this is something that, like, yes, we should be really nervous about. And I'm very curious if there's anything that we could do to help make sure that this doesn't go haywire.

Divia: Okay, can I try to name... like, how I... I'm now trying to think about how I would describe how you're thinking about this. Can I try to name it? Okay, so I think a thing I often... it seems like maybe a pattern of thought of yours, feel free to correct me, is that you sort of imagine various interest groups, like you'll say the public, or like the American citizens, or whatever. And then you kind of, I'm imagining that you imagine, okay, like, assume that I were judging that group as though it were a single person, as though it were like a friend of mine. Would I think their actions were reasonable? And then you sort of have, like, an impression about that. Often you think they're not. And then you see the sort of gap between, like, what I think is reasonable if a person did this, but okay, fine, maybe it's, like, you know, 200 million people, so it's harder. And you see that kind of like as an engineering problem to be solved with better tools for thought, better tools for sharing information, better commitment devices, things like that. Does that seem sort of roughly right?

Ozzie: I mean, it is like a very broad infrastructure that you're talking about, that we kind of have this abstract notion of what the utility function is of a large group of people, which is something

Divia: Yeah.

Ozzie: that you could approximate, right? You say like,

Divia: Great.

Ozzie: what are different scenarios that are gonna be better and worse for this group of people, depending on what the group of people says, right? And then you have the question of, like, how good of a job does that group of people seem to be doing on that unsaid utility function? And yeah, I'd argue that they're not... like, I'd argue that we should be pushing for better. I think that there are a lot of tools that we have available to us that we could use to allow groups of people to do better on their own utility functions. The word 'should' is a messy word, like, 'how good should we expect people to do' and stuff like that. I don't know how to say things like that, but I

Divia: Yeah,

Ozzie: could say like,

Divia: it's hard

Ozzie: yeah.

Divia: to talk about, for sure.

Ozzie: Yeah, yeah. But I think that with the resources that we have within any of our toolkits, there are things that we could do with some of those resources to get us into better positions.

Divia: Yeah, it definitely seems like you identify that gap as like, I don't know, that's like an area of opportunity that you want to work on.

Ozzie: Yeah.

Ben: And this ties into your current work, which maybe you could say a little bit more on, because you talked a little bit about it earlier, but it might be good to expand on relative value estimation. That's the current main focus of yourself and QURI.

Ozzie: Yeah, of course. Yeah, so QURI, the Quantified Uncertainty Research Institute. The big picture is how do we make, or the big picture that I'm interested in is, how do we make these very advanced estimation infrastructure systems and then apply them to many things? And there's a whole lot of detail in what the engineering architecture of such a system should actually look like. Taking a big step back, now all the rage is AI. So there's a big question of maybe we just want to ask language models every last thing, in which case you don't have to think much more about world models and doing parameterized modeling. That's kind of a different bag of worms. I'm more focused on how do we do it using technologies that exist today. So yeah, then there's the question of what that looks like. I think there are probably a bunch of innovations that we want in order to do a good job, in order for us to get from maybe a level six, where we are now, to a level 10, where we may want to be a few years from now. We've been working on some of those tools. So one of them is estimation functions. So right now on existing forecasting platforms, people forecast binary decisions or binary events, and sometimes they forecast numbers. In some very specific cases, they'll forecast a few time series variables. So they'd say, what's the distribution like for these five points in time? We're interested in having forecasters write functions that could then express much more sophisticated forecasts. So a forecaster would basically express, for any point in time in the next 20 years, this is what I think the GDP of every single country and state and region is gonna be. So this is something that you can only really encode using code. Like you have to express it using some code, and then we need infrastructure to make sure that that code actually gets aggregated, so different forecasters could submit their code, and the code gets run whenever people want to understand what the forecast is for any collection of items. But hypothetically, this allows forecasters to express a much bigger space of ideas. [A rough illustrative sketch of such a forecast-as-function appears after this turn.] So in the future, yeah, basically, we want to experiment with this. Now, this is hard. A lot of forecasters don't know how to write code that well. It definitely requires more sophistication and also more infrastructure. But it also does come with a lot more power. So in order to do that, we've been working on Squiggle, which is a programming language that runs on top of JavaScript. So it works very well with these browser workflows. It works well on websites. So Squiggle is kind of like Guesstimate, the programming language. So it's kind of like a more powerful version of Guesstimate. Of course, it's a programming language, so you don't get some of the UI that you get with Guesstimate. So it's a different set of things. But getting this right has been taking a lot of time. But we have started using it for forecasting. And we want to make that a much bigger thing, of course. But of course, this is a question of, like, it is kind of on the cutting edge of forecasting infrastructure. Relative values specifically, sorry for the rant here, it's kind of a lot of stuff. Relative values, I think, are one cool thing that you could start doing when you have programming functions like this. So when most people, it is kind of a hard idea for a lot of people, I think it is a bit nuanced. When people think about utility functions and valuing things, they typically think about doing it all on one scale.
So you may

Divia: Yes.

Ozzie: value what a company does in terms of money, or you may value health interventions in terms of QALYs. But if you want to estimate the value of very different types of things, then you really can't use any one unit to do so, because when you convert things into that unit, it adds a huge amount of uncertainty. So if you wanted to estimate how valuable everything in long-termism is on the same scale, it's not obvious what the scale or unit even is. We may be able to use a term like micro-dooms or micro-topias, which kind of tell us, like, one millionth of a chance of the world ending that you'd be averting with this intervention. But those are pretty messy. So a better way of doing it, or a more sophisticated way, although it's more energy to do, is basically allowing people to express the relative values of any two interventions. So for any two interventions that you give me, I basically tell you the ratio of how much better one is compared to the other. And this is more work, because now I have to do this for this n-by-n combination space.
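[Editor's note: a minimal, illustrative sketch of the "estimation functions" idea Ozzie describes above: a forecast submitted as a function that returns a distribution for any country and year it covers, rather than a single number. This is not Squiggle and not from the episode; the anchor values and growth assumptions are invented placeholders.]

```python
# A forecast expressed as code: callable for any covered country and year.
import random

BASE_GDP_2024 = {"US": 27.0, "India": 3.7}   # trillions USD, rough invented anchors
MEAN_GROWTH   = {"US": 0.02, "India": 0.06}  # assumed average annual growth
GROWTH_SD     = {"US": 0.01, "India": 0.03}  # assumed uncertainty in that average

def gdp_forecast(country, year, n=10_000):
    """Return Monte Carlo samples (in trillions USD) of GDP in the given year."""
    horizon = year - 2024
    samples = []
    for _ in range(n):
        g = random.gauss(MEAN_GROWTH[country], GROWTH_SD[country])
        samples.append(BASE_GDP_2024[country] * (1 + g) ** horizon)
    return samples

# Anyone can query the function at any horizon, and different forecasters'
# functions could be aggregated by pooling their samples.
samples = gdp_forecast("India", 2035)
print("mean forecast, India 2035:", round(sum(samples) / len(samples), 2), "trillion USD")
```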

Divia: Wait, sorry, but you're imagining interventions of roughly the same type, but that, like, wouldn't be natural to describe with the same units?

Ozzie: So you could do that, and I think that's where a lot of the benefit is. But it would also allow you to do it on different clusters of things. So for example, within long-termism, you could say, maybe within the MIRI-style agenda, you could say this paper is probably four to 10 times as valuable as this paper. And this tweet is probably one one-thousandth as valuable as this paper, or something like that.

Divia: Okay.

Ozzie: So you could basically clarify, we have pretty decent understandings of how these things compare to each other within this, like, cluster.

Divia: So you imagine mostly using it within a cluster. So it wouldn't be like, how much is this MIRI paper worth compared to, I don't know, this animal welfare bill?

Ozzie: So I think that you would get both, but a lot of the focus would be on the clusters. So basically, what estimators would be doing 99% of the time is estimating things within clusters. But the moment you estimate the comparison between one item in one cluster and an item in a different cluster, then, if you want to, you could use that to calculate how valuable those clusters are compared to each other. And it will

Divia: Okay.

Ozzie: be very uncertain. It'll be

Divia: Yeah.

Ozzie: a very wide probability distribution.

Divia: Yeah. And I think this is sort of what you were saying before. And this is one reason that even though, of course, people have value differences, you're optimistic that there's still a lot of, I don't know, like fruit to be picked, if not low hanging, like maybe labor intensive, but not like philosophically complicated fruit to be picked from that. People often have a lot of agreement within the clusters and then you could have sort of like broad parameters and like people could put in their own values for between clusters, but within clusters, there may be more agreement.

Ozzie: Yeah, I think there's basically a whole lot of local optimizations to do.

Divia: Mm-hmm.

Ozzie: And that's often true. For example, in a lot of companies, it's very unclear. Like the company's net worth or market cap may be an extrapolation of what its total earnings are going to be over its lifetime, which may only really take place, like, 10 to 50 years from now. So if you were to try to make all decisions based on what will maximize its expectation on that, a lot of the items would go through the same uncertainties, so they'd be very uncertain. But it's a very, very safe bet that if you just optimize for money this year, that's a pretty decent proxy for optimizing money 20 years from now. And then a lot of decisions are also just very localized. So you may have to choose between, we have three candidates for a specific position, and we have to choose the best candidate. So we don't really need to judge how good each one of them is on this gigantic scale right now. We just have to figure out how good they are compared to each other. But as long as we're able to do a decent job with that, that's able to get us pretty far. So if we're able to do a decent job -

Divia: I mean, is that... but are you implicitly thinking, like, how good are they compared to each other at maximizing the... like, I guess I'm a little bit confused about why that's easier or something, even though, like, of course... I don't know. Also, of course, it seems easier. Like, is the specific... because in some sense, like, the way that I'm comparing the

Ozzie: Yeah.

Divia: candidates to each

Ozzie: Yeah.

Divia: other is I'm implicitly or explicitly doing some calculation about adding long-term value to the organization, right?

Ozzie: Hopefully, yeah.

Divia: Hopefully.

Ozzie: And the way that most people make this decision is that they don't try to put numbers on it. They just do the best in each item, in each cluster. And that works the best, right? Basically, I'm trying to figure out a way to prescribe numbers to all these things that could

Divia: Mm-hmm.

Ozzie: then very trivially be understood in terms of the global thing when you want to, but

Divia: This

Ozzie: you don't

Divia: is from

Ozzie: need to. Yeah.
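[Editor's note: a minimal, illustrative sketch of the relative-value idea discussed above: most comparisons are fairly tight ratios within a cluster, one wide "bridge" comparison links clusters, and uncertainty propagates when comparisons are chained. This is not from the episode, and every ratio below is an invented placeholder.]

```python
# Relative values as ratio distributions: tight within clusters, wide across them.
import math
import random
import statistics

def ratio_samples(low, high, n=10_000):
    """Lognormal samples whose roughly-90% interval is (low, high)."""
    mu = (math.log(low) + math.log(high)) / 2
    sigma = (math.log(high) - math.log(low)) / (2 * 1.645)
    return [random.lognormvariate(mu, sigma) for _ in range(n)]

# Within cluster A (say, research papers): fairly tight.
paper1_over_paper2 = ratio_samples(4, 10)
# Within cluster B (say, outreach posts): fairly tight.
post1_over_post2 = ratio_samples(0.5, 2)
# One cross-cluster bridge comparison: very wide.
paper2_over_post2 = ratio_samples(0.1, 100)

# A chained cross-cluster comparison inherits the bridge's uncertainty:
# paper1/post1 = (paper1/paper2) * (paper2/post2) / (post1/post2).
paper1_over_post1 = [a * b / c for a, b, c in
                     zip(paper1_over_paper2, paper2_over_post2, post1_over_post2)]

for label, samples in [("paper1 / paper2 (within cluster)", paper1_over_paper2),
                       ("paper1 / post1  (across clusters)", paper1_over_post1)]:
    cuts = statistics.quantiles(samples, n=20)  # 5th through 95th percentiles
    print(f"{label}: 90% interval ~ {cuts[0]:.2f} to {cuts[-1]:.1f}")
```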

Divia: this is... I don't know if this is that relevant, but it's reminding me, ages ago I read that book, How to Measure Everything

Ozzie: Yeah,

Divia: or

Ozzie: yeah,

Ben: Mmm.

Divia: Anything, which

Ozzie: yeah.

Divia: I imagine you've read too.

Ozzie: A big fan.

Divia: Yeah, it reminds me a lot of that. And what I remember, and it has been a long time since I read it, is the book is like: people have all these complaints about why measurement isn't perfect, but is that really a reason not to at least try?

Ozzie: Yeah, I think the world that I'm trying to aim for is one where we have estimates. I mean, he kind of gets to it. Like, we should have estimates of the value of estimating things, right? So we do that first, and if it comes out that estimating things is negative, that could be for multiple reasons. One is that it's too hard. Another one is that it's too politically unfavorable, like, if you did it, it would look very bad for you. Or you just estimate that the potential choices are in fact very similar in value.

Ozzie: It could, yeah. Well, I mean, that's kind of like the value of information.

Divia: Yeah.

Ozzie: Like, it's just not that valuable to do the estimate for. So there are a lot of ways to do that. Of course, if you want to get more advanced, you have value functions that say, for spending five hours on this, this is about how much value you'll get. If you do 20 hours, this is about... so therefore, you should cut off at
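[Editor's note: a minimal, illustrative sketch of the "estimate the value of estimating" idea above: keep spending hours on an estimate only while the next hour of information is worth more than it costs. This is not from the episode; the value curve and all numbers are invented assumptions.]

```python
# Value-of-information cutoff: diminishing returns to more hours of analysis.
import math

VALUE_OF_PERFECT_INFO = 50_000   # assumed $ value of fully resolving the question
HOURLY_COST = 200                # assumed $ cost of one hour of analysis

def info_value(hours):
    """Assumed diminishing-returns curve: value captured after this many hours."""
    return VALUE_OF_PERFECT_INFO * (1 - math.exp(-hours / 10))

def marginal_value(hours):
    """Extra value from the next hour of analysis."""
    return info_value(hours + 1) - info_value(hours)

hours = 0
while marginal_value(hours) > HOURLY_COST:
    hours += 1

print(f"suggested cutoff: {hours} hours, value captured ~ ${info_value(hours):,.0f}")
```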

Ben: Do you expect this kind of approach to be a power tool, something like how there are superforecasters, so that there would be super-estimators who are using it? Or is the anticipation to make a tool for thought that could be used by your average business analyst?

Ozzie: So I'm definitely not thinking about the average business analyst, maybe, I think. One term I've kind of liked using, although this is not a copyrighted thing, like 'superforecasters' is a copyrighted thing. But hypothetically, we want some really good forecasters who could also write code and do it in this more extensive way. So one term for that, hypothetically, is a super duper forecaster.

Divia: A non-copyrighted term, super

Ben: Hehehehehehe.

Divia: duper forecaster.

Ben: Yeah, perfect,

Divia: I

Ben: you've

Divia: like it.

Ben: one upped them.

Ozzie: Hypothetically, what would these procedures be like? So obviously I like solving things in estimation terms. So one way to do that is by basically estimating which estimation technique will be the most effective. And there are a few ways of getting that to happen. One is with contract mechanisms, where we basically put out offers of, like, who could estimate this big set of things for the cheapest? And then we have clients take that on. And then there's kind of an empirical question about which specific structure is gonna be able to do estimating at scale the most cost-effectively. My guess is that a lot of what this will look like are hedge-fund-like entities. So basically imagine a team of, like, five to 12 people with very different skill sets. Some of them are great at writing code. Some of them are great at doing, like, object-level investigations. Some of them just provide data. Some go out in the real world and try to interview the people that

Ben: Mm-hmm.

Ozzie: they need to. And basically these collectives have an agreement where they get paid in proportion to how correct they are, right? Like how accurate their forecasts are. And yeah, they get money in and then provide estimates out, and maybe in some cases some other kinds of information, like reasoning and explanations behind it. So yeah, I would like to see professional infrastructure of different types. I think that this type is going to be pretty cost-effective. One reason why I think it's going to be cost-effective is because that's how hedge funds do it. Like when people in the finance world decide, how do we spend $100 million in order to solve the stock market, they don't set up their own stock market inside that stock market and then give those people tools. Like that's what a prediction-market-style setup would be like: let's set up our own market of people who are gonna be incentivized not to share information with each other in most cases. That typically isn't what hedge funds do. What they do instead is they have very open infrastructure of people with very different skill sets who have to collaborate a lot with each other and come together. And they give them incentives, they make sure that they are very meritocratic. The good hedge funds, anyway; there are definitely a lot of shitty hedge funds out there. Yeah, and that seems to be the best way that we know of to convert money and cash

Ben: Mm-hmm.

Ozzie: into very good, what are basically estimates of the output of that. So my guess is we want similar infrastructure of highly skilled people with a diverse set of skill sets using advanced technology going forward. I would treat it a bit like data scientists, in that data science techniques aren't recommended for the everyman, right? Like data science is now established as an existing field, and there are people with expertise in the area. Similarly, I'd expect there to be people with a lot of expertise in this area. There's a big question of how many of them we're gonna have. If AI becomes a big part of it, maybe we're not gonna need that many, right? Maybe it's like we only

Ben: Mm-hmm.

Ozzie: need 10 to 100 people in total, to be basically overseeing large AI-generated sets of estimates and helping make sure that goes well. In another world, maybe we just have much more advanced tooling, and that's done by many people around the globe. It is very hard to predict exactly how this is going to play out. I think that for EA, though, if we have maybe $100 million a year in order to encourage the kind of estimates that we want, maybe we spend a third of it on existing-style open forecasting platforms. But we'd probably be spending a lot basically on salaries of some specialized people to full-time do a good job in hedge-fund-like entities.
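[Editor's note: a minimal, illustrative sketch of the "paid in proportion to how correct they are" arrangement described above, using the Brier score on resolved binary questions. This is not from the episode; the questions, probabilities, budget, and the particular accuracy-to-pay mapping are invented assumptions.]

```python
# Accuracy-proportional payout for a forecasting team, via the Brier score.
resolved = [
    # (probability the team gave, what actually happened)
    (0.80, True),
    (0.30, False),
    (0.60, False),
    (0.95, True),
]

def brier(p, outcome):
    """Squared error of a probability against a binary outcome (0 is perfect)."""
    return (p - (1.0 if outcome else 0.0)) ** 2

avg_brier = sum(brier(p, o) for p, o in resolved) / len(resolved)

# Arbitrary mapping: full budget for a perfect score (0.0), nothing at or
# beyond the always-say-50% score (0.25).
BUDGET = 100_000
payout = BUDGET * max(0.0, 1 - avg_brier / 0.25)

print(f"average Brier: {avg_brier:.3f}, payout: ${payout:,.0f}")
```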

Ozzie: I could show you the, I mean, we can't do this on video, but seeing the relative value app, I think, would make a lot more sense. I could show

Ozzie: I guess, yeah, I did show it in the video. But in that case, you'd see that it's basically a way... yeah, you store all the information in one place, and a simple type definition, but you're still able to understand the nuance between areas that are very similar to each other, which I think most people are not gonna get otherwise.

Ben: Yeah, I think I'm feeling pretty complete in a positive way where I feel like I've got a good sense of your worldview, so to speak. I think I already have one. I'm curious, Divia,

Divia: Sorry,

Ben: do you feel, yeah. No, you

Divia: I think

Ben: go.

Divia: there is one point I wanted to dig into. I don't quite know how to ask this, but I remember seeing some thread on Twitter, that now I forget who it's from, that was talking about, and this is more what we were saying near the beginning of the episode, talking about EA as being a lot about epistemic deference. And I think the person who was writing the thread was somewhat... critical of this paradigm. But I see you as being like, no, that's right, and I would double down. And not that anybody has to or whatever, but that you do think that that's basically a good way for people to think about things, and a much more reliable way to think about things than what typically happens. Is that a fair way to characterize your position?

Ozzie: Very generally, yes, but of course it is a sophisticated, nuanced discussion. This is a case where, like, justified trust matters a lot. So if

Divia: Mm-hmm.

Ozzie: you place your trust in a knowledge authority who doesn't actually deserve it, then that is a bad thing, right? Like there are many cases where people... like there are definitely many cases in society where people defer epistemically to the wrong people, and

Divia: Mm-hmm.

Ozzie: it would be better if they didn't do that. Like, that's very obviously true. In the EA case, there is some discussion about epistemic modesty. But some of that discussion can be more specifically fine-tuned by asking which specific people do people feel should be trusted. I think generally the position... when it comes to prediction markets, there are some more concrete questions we could get at. So one of them would be, when should you trust the answers that prediction markets give versus the answers that different intellectuals on Twitter or wherever give?

Divia: Right.

Ozzie: I think in the vast majority of cases, the prediction markets are better.

Divia: We, by the way, are a prediction market loving podcast. As

Ben: Huge

Divia: we

Ozzie: Okay.

Ben: supporters.

Divia: have discussed in past episodes, we,

Ben: We're looking for sponsorship, Kalshi, FYI. Hehehe.

Divia: we both bet on prediction markets, at least some.

Ozzie: For the sake of forecasting utopia and estimation utopia, I think it would be great to have estimates of how good each forecasting question was. Some forecasting questions just have very little time on them, so it's pretty easy to outperform them, and other ones, I think, require quite a bit of time. And I think hypothetically in the future, it should be easier to discern which intellectuals are just bullshitters, right?

Divia: Yeah, and we were certainly talking about this a little bit before, but I mean, right now, what do you have? I don't know, like, what sort of algorithm do you either use or would recommend? So for sure, I definitely agree with you that if there is some sort of question where there's a highly traded prediction market, I typically defer to the prediction market. And if for some reason I don't, then I'll usually bet. But there are a lot of questions where there's nothing like that currently, right? I can hope that there will be in the future, and I do hope that there will be more of that in the future. But do you have thoughts on how to sort of, I don't know, practice epistemic humility or modesty in a skilled way when there isn't necessarily a prediction market or

Ben: Mm.

Divia: a super forecaster with a strong track record who's weighing in?

Ozzie: So the first question there is, how important is this question versus slightly similar questions?

Divia: Mm-hmm.

Ozzie: So what I'm excited about are ways that we could apply infrastructure to this type of problem. So if I provide advice, and I don't have that much great advice that is easy to put into soundbites about how to apply this. You could say, oh, these intellectuals seem better to me, but it's for these long reasons of me trying

Divia: Right,

Ozzie: to track

Divia: right.

Ozzie: how accurate they are over time. And in general, I think that a lot of people are overconfident. My expectation is that as more investigation is done, a lot of people would be shown to be overconfident. But I think this is very hard to explain; it is easier to point to what a better world would look like. I have one post on LessWrong about imagining if intellectuals were judged and evaluated similarly to professional sports players.

Divia: Right?

Ozzie: So imagine if each intellectual had their own scorecard page, with a list of everything that they've said that has really done poorly over time, and the specific things that they do that seem really shitty. Also, hypothetically, evaluations by different committees, like, investigating their work and saying, A, how interesting is it, and B, how reasonable is it? There are a lot of people who have very interesting work, but you just shouldn't trust it that much. You should instead just defer to forecasters where there's a disagreement. But there are definitely some people who are able to outperform the forecasting markets, or who you could generally trust when a lot of questions aren't on the forecasting markets. So I think that if we had a more mature ecosystem, we would have a lot of infrastructure in place to really add transparency into how reasonable these people are. And I think that would help a lot in these types of decisions. But of course, you know, they're going to push back against this; a lot of people who have authority won't like this. So it would be expensive to set up these types of things. It is also a case where, you know, this is adding transparency, and arguably, people would feel like... yeah, it's a trade-off. Intellectuals like being treated as people who... they don't wanna be ranked. Most people don't like public information about all their positives and weaknesses, right? Especially if you're an intellectual, one of the things you're typically amazing at is convincing people of your take.

Ben: Mm.

Ozzie: So if someone else is out there trying to say how good your take is, but you're really good already at convincing people of your take, then that's generally bad for you.
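[Editor's note: a minimal, illustrative sketch of the "scorecard for intellectuals" idea above: track resolved public claims per person and rank them by a proper scoring rule, the way athletes are ranked by their stats. This is not from the episode; the names, claims, and probabilities are entirely invented.]

```python
# Rank pundits by average Brier score over their resolved public claims.
track_records = {
    "pundit_a": [(0.90, True), (0.80, False), (0.70, True)],
    "pundit_b": [(0.60, True), (0.40, False), (0.70, True)],
    "pundit_c": [(0.99, False), (0.95, True)],
}

def avg_brier(record):
    """Mean squared error of stated probabilities against outcomes (lower is better)."""
    return sum((p - (1.0 if o else 0.0)) ** 2 for p, o in record) / len(record)

for name, record in sorted(track_records.items(), key=lambda kv: avg_brier(kv[1])):
    print(f"{name}: avg Brier {avg_brier(record):.3f} over {len(record)} resolved claims")
```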

Divia: Yeah, the current crop of intellectuals, though I could, I don't know, when I imagine, when I'm optimistic about this sort of thing, I'm like, okay, but then if this sort of thing did become more common and people cared about it, then maybe the next round of intellectuals would be less about convincing people of their takes

Ben: Mm-hmm.

Divia: and more about having better takes.

Ozzie: Yeah, I think basically the truth comes out. Like right now it's kind of expensive to gain information about how correct intellectuals are, but it is better than it could be, right? Like people do get some information about how good intellectuals are, and intellectuals are correspondingly incentivized to be somewhat accurate, or at least not, like, super obviously incorrect. But we could do a better and better job. And as we do, I'd expect that intellectuals would be better incentivized, and also that the intellectuals people start listening to would be, like, just different people. Right, so I think that if we're okay being pretty transparent and spending resources on this type of thing, we could definitely imagine worlds where we would have much better epistemic norms. That said, of course, keep in mind there are different interest groups where all of their intellectuals are doing kind of sketchy things, so those groups really will not, like, may not enjoy things that make this type of thing more transparent, right? So I'd expect there to be pushback in any of these cases, but there's definitely a there there. There are ways that we could try to imagine much more epistemically mature civilizations. Step one.

Ben: I think I'll just close with saying thank you, Ozzie, for coming on. I've always enjoyed talking with you, and this really feels like it gives me a much clearer understanding of how you think about the world and your work.

Divia: Yeah, thank you. Thank you also, and thank you for the work you're doing on better tools for thought and helping us achieve epistemic maturity as a civilization. Is there anywhere, if people wanna follow you, that they should find you?

Ozzie: Yeah, so go to quantifieduncertainty.org for our website. Over there, there's a newsletter on Substack, the QURI Medley.

Divia: Awesome.

Ozzie: Also on Twitter, but those are the main things.

Ben: Great.

Ozzie: Thank you so much.

Ben: Thank you.
