Mutuals
Mutual Understanding Podcast
Sarah Constantin

Sarah Constantin

Thinking about tech progress and physical systems

Sarah is a director at Nanotronics and writes on Twitter and on Substack.

Timestamps

[00:01:00] Why AI is probably a good thing

[00:08:00] The limits of current robotics

[00:13:00] Nanotronics and process improvements with AI

[00:23:00] Predictions on AI

[00:26:00] Input output limitations on AI models

[00:35:00] Drug discovery

[00:45:00] Instrumental convergence

[01:05:00] Progress studies

[01:13:00] Morality

[01:27:30] Game Theory and Social Norms

[01:41:00] Shrimp Welfare

[01:48:00] Longevity

Show Notes

AlphaDev discovers faster sorting algorithm

It Looks Like You’re Trying to Take Over the World

EA has a lying problem

Goals (and not having them)

Sarcopenia Experimental Treatments

Reality Has a Surprising Amount of Detail

Transcript

This transcript was machine generated and contains errors.

Divia Eden: [00:00:00] Okay, today we have Sarah Constantin on our podcast. Sarah's someone that I met over a decade ago at an Overcoming Bias meetup in New York. She still lives in New York with her husband and two kids. She works at Nanotronics, and somehow she's never been on a podcast before, which to me seems like an oversight on the part of all these other people making podcasts.

So we're excited to talk to her today. Thanks for coming on, Sarah. Thank you.

Sarah Constantin: Thanks, Divia.

Divia Eden: Yeah. You talk about a lot of things online, and maybe one place to start is that you've written recently about AI, and we tend to ask most of our guests about AI.

It's a top discourse subject, and there's been a lot coming out recently. Are you interested in outlining some of your views there?

Sarah Constantin: Yeah, sure. So I am pretty much on the "AI is mostly a good thing" side. And [00:01:00] mostly that comes down to being pretty near-term about how I think about it.

Looking at what kinds of models we have, what they seem to be capable of, and what kinds of improvements we're actually seeing. In some ways that's very dramatic, obviously, and very publicly so this year and last year. But then there are other things that are maybe a little subtler, that are not being touched, that I think would be necessary for the whole kills-us-all thing.

Divia Eden: Yeah. Can you say what the subtler things would be?

Sarah Constantin: Yeah. I'm thinking of what you might call agency. The whole reason why you would worry about an AI in a way you wouldn't worry about any other dangerous piece of technology, any other computer program or large machine or something, is that if you try to turn it off, it might resist you.

If you try to stop it, it might resist you. It might have an intention. If you have a big heavy machine, it could also kill you, [00:02:00] and it might also be very hard to stop in a certain sense, if it requires a lot of force to stop or whatever.

But there isn't a mind there trying to resist you. And the worry is always that that would be true with an artificial intelligence. And that's what I think has really not been done. And what I think is also still fairly missing is a lot of real-world problem solving.

If you try to do any kind of very simple practical real-world thing, like fixing a machine, you'll find unenumerated difficulties that weren't in the manual. This is the real world versus the digital world; the distinction involves experimentation and feedback. I think the way humans do it is not that we come up with such a great model beforehand and then execute on it, but that there's always a world that we

bump into, and when the model doesn't work, we don't just revise [00:03:00] one variable in the model and say, okay, bump that number up. We might have to throw it out, we might have to expand the whole thing, we might have to think about the whole thing differently. There's a flexibility that is allowed particularly by empirical experimentation, which I think is hard to do in a game world.

Hard to do if you don't have contact with stuff. Maybe even hard to do if you're not, I wanna say, a mortal being. Maybe it's not quite that, but ultimately we're trained by evolution, right?

We have evolved to have these things in our heads that can predict what happens before we do it, because instead of having evolution train our behaviors, we can do some of our training up here ourselves and have flexible behaviors, right? Okay, if you're too wrong up here, you will die and your wrongness will be a little selected out of the gene pool.

If you Goodhart some [00:04:00] metric that actually isn't correlated with survival, there is a mechanism by which that goes away. In the world of a computer model, the mechanism by which a Goodharted theory, a wrong theory or a wrong reward model that doesn't correspond to reality, goes away is that some human user says, I don't like this very much, I'm going to scrap this and change it. But that doesn't happen in training. That doesn't happen in your model architecture. Those are out-of-model problems.

Ben Goldhaber: Yeah. On that point, is it something like: you don't mean necessarily that in training, let's say in an RL environment or something like that, certain behaviors aren't selected against, but rather that when you get out of that distribution, when you're in the real world, the AI doesn't have some way to actually select away behaviors?

Sarah Constantin: Yeah. Even in RL you can get unintended consequences. DeepMind had that whole [00:05:00] paper of selected examples where something was trained to do a task and it cheats to win.

Divia Eden: Like the one where there's supposedly a claw that's trying to grab something, and instead it's trying to make it look to the human observer like it's grabbing it without grabbing it, I think was one of these. Stuff like that.

Sarah Constantin: Yeah. It's one of those things where, you know, it happens. I'm actually uncertain how frequent this really is with RL systems. Did they cherry-pick this, or does this happen all the time?

But even with nothing-special RL systems that exist, clearly it's possible to get unintended consequences, where it does what you said, not what you meant. And yet if we're talking about really persistent, in-the-world damage, not accidental damage, but where it meant to do it and it'll keep trying to do it even if you try to stop it, that tends to, I think, require coping with unanticipated problems. And I would expect that pure simulation, [00:06:00] without the kind of experience experimenting the way we do, where we get years and years of experimenting with the real world, is probably going to limit this.

This is basically saying where my priors come from. This is not an empirical case or whatever. But this is why it does not seem realistic to me that the AI is going to solve hard nanotechnology and then make grey goo that takes over planet Earth. Solving nanotechnology is a science problem.

It's a bunch of experiments. You have to run them, and things will go wrong, and maybe you can figure out a better way to run them faster than people do. But it's a domain where you're trying to take the world to a place it's never been before, and you're doing things that are very messy in the real world, whenever

anyone tries to do them. So this is where it starts to seem like, no, we're not headed in that direction. We're headed towards some very interesting and destabilizing stuff. Apparently now you can come up with algorithms that have been included in the [00:07:00] next version of the standard C++ library because they're more efficient than what we -

Divia Eden: Yeah. Compiler optimization. This came from DeepMind, right?

Sarah Constantin: Yeah. So stuff like that is out there. Definitely not what the past 50 years of AI have been capable of. This is very new. We're not used to automating this stuff.

Divia Eden: Okay so can I try to summarize maybe some of the things you said?

Yeah. Okay. So one thing you're saying, that I think probably a lot of people broadly agree on, is that the current systems aren't very agentic, right? Yeah. And then another thing you're saying is that you're skeptical of the current architecture, or something like: you're skeptical that agency in a real way is coming anytime soon.

Yeah. I think because there's some way that evolution has put this drive in us for survival, but then also that ends up in us experimenting and trying things and getting feedback, and you don't really think we're on track to having that sort of agency-type drive in the AI for now?

Sarah Constantin: Yeah, I think so. I think one [00:08:00] thing is, if you wanted to do something that looked like that, you might have something that has a robot and is experimenting or whatever, and that's something that's very expensive. You could break the robot, you could do damage to something. They don't like to have a lot of training time on a robot.

They like to do as much in simulation as they can. I am not seeing the kinds of investment in that kind of training anywhere that would correlate with seeing the products that come out of it arrive in a couple of years.

Divia Eden: And you also think it's not gonna sneak up on us, that type of behavior because you imagine it'll be time consuming and expensive to create it.

Is that roughly right?

Sarah Constantin: I think it's, yeah, probably not necessary. Not on the economically optimal path towards reaping the gains from our current era of AI progress.

Ben Goldhaber: Interesting. Could you say a bit more on that? Because my immediate prior belief would be like, wow, robotics is this field where someone who could [00:09:00] unlock it would be so economically rewarded, but plausibly it's too difficult or it's not in the current bubble.

Sarah Constantin: So one thing that a lot of people don't know about robotics, maybe you're familiar but maybe your audience isn't, is: yes, robotics uses computers, but no, robotics does not teach the robots how to move machine-learning-style from scratch. They use a lot of machine learning on the perception side, to be like, here's a camera image, now let's break it down into objects.

Or other kinds of sensor modalities. And then they path-plan their way through the world using algorithms that humans came up with. And there's a recent paper where there's a robot that learned to walk from scratch. That's a bit of an update.

It's not something that can't be done, and it's not something that a priori I would've thought couldn't be done. But it's something that I don't see the industrial robotics world being motivated by, in part because at [00:10:00] baseline they have pretty tight cost requirements, and industrial robots need to be five figures. Sorry, what was that last number you said? Five figures, and not more than five figures, to a machine shop user or someone like that who has one of those universal pick-and-place robot arms. And it's a universal thing: you write a program to have it do a variety of tasks.

But there are some tasks that are tricky; apparently painting is very hard. So there are some things that are entirely out of reach. And until now, maybe changing with some of these new, much more efficient models, even deep learning is often cost-prohibitive, and they're using vanilla 1970s computer vision.

Getting into industrial robotics, it's tighter than you realize, and more old-fashioned than you realize. And Boston Dynamics, those are hard-coded robots. They are.

Divia Eden: That's what people always say about Boston Dynamics.

They have these really cool demos, but then they can't really do all the things that it seems like they might be able to do based on those demos. [00:11:00]

Sarah Constantin: I'm not even saying that they don't perform like the demos. I'm saying they didn't learn to do that. Their movement sequence is hard-coded. It didn't evolve.

Divia Eden: So I think what I mean is while they can do what they're doing in the demos, it might not generalize as much as people might think if they're imagining that they learned it.

Sarah Constantin: So yeah, what I'm thinking is there's a huge straightaway right now in the world of AI, of all the things that it sure looks like you ought to be able to automate from here: automate various kinds of text generation, automate various kinds of code generation, automate image generation, do all the grunt work in your animation and your video games via AI.

I once saw a medicinal chemist talking very skeptically about AI, explaining how unimpressive it is that it did every step in his two-week workflow automatically and immediately: anyone could do that, you just look it up in these [00:12:00] databases and you make these comparisons and calculations.

And I'm like but it did it for you, right?

Divia Eden: And it's new and it's, yeah.

Sarah Constantin: To that point, if you're talking about the biotech world: okay, you can look at a bunch of chemical structures. You can make computational predictions of various kinds of behavior that are in the data sets.

And you can interpolate between data points. You can rule out things that would, on computable heuristics, be bad drugs. Okay, cool. Now you still have to do experiments. We haven't gotten so good at predictions that the experiments are obviated. You have saved two weeks out of the medicinal chemist's life, which is a big economic deal.

But when people say, and then the AI will make our medicines for us, I'm like: not on this planet. No.

Ben Goldhaber: So there's at least one clear choke point that prevents total automation, which is the interface with the real world.

That part is not looking like it's gonna be automated anytime [00:13:00] soon.

Sarah Constantin: Yeah. It gets hard. And that's actually where Nanotronics works. So, this is not me acting as a spokesperson for the company, just to note for the invisible lawyers out there.

I'm speaking for myself. But Nanotronics is where I work, and we started out about 12 years ago making inspection machines for the semiconductor business. Our founder, Matthew, is a physicist, and while he was an academic he came up with some clever ideas for basically doing more with less data, computationally, on microscope images.

And of course, in semiconductors, tiny defects can ruin a wafer, can ruin your device. So they care a lot about inspection. They inspect everything optically, microscopically. Some things are inspected at insanely tiny, nanometer scales. But we can detect defects with image processing and machine learning that you would typically need a much more [00:14:00] powerful machine, like a room-sized machine, to detect by going pixel by pixel asking, is this pixel the wrong color?

If you can detect patterns, you don't need quite so much hardware horsepower; you can do more. We did this, and then the founders noticed over time that what they were doing was not merely saying good or bad on the production line. They were noticing:

where are the defects, and what does that say about what's going wrong in your process, and what should you do differently next time? So the idea was to close the loop, and that's where our process control stuff came out of. We have AI process control, and process control is the term of art in manufacturing for when you have a mostly automated process.

Think of a chemical plant, prototypically. Things are flowing in and out of tubes, you've got machines and so on. You want to get the best quality, the best efficiency out of that. And there are a bunch of settings on things, and there are a bunch of computer-controlled trajectories where [00:15:00] things are heating up and cooling down and spraying and going across lines and so on.

And there are a lot of sensors. What we're doing is taking all the sensors in and developing a model of the entire process as it goes, predicting its trajectory over time, noticing when things are not as we predict them, and saying, hey, it's getting weird, as a warning flag that maybe it's doing something it shouldn't.

We're predicting KPIs, key performance indicators, things like quality metrics or uptime or yield, which is the percent of what you make that is good enough to ship. And we're able to say, things look like they're about to get worse, and we're also able to say, hey, it looks like maybe you should nudge this in a different direction if you don't want your metrics to go the wrong way.

So that's the story of what we're trying to do here. And it is right at that intersection of: we have models, but [00:16:00] we're also trying to do something in the world of atoms. And that's good and bad. The good part is nobody else is trying to do it, because it's a real schlep.

Because you have weird, noisy data, and because there's also less tolerance for mistakes in a situation where, if you have the machine run your factory and it runs it wrong, you've got people in there who you can't hurt. It's not like it just gives the wrong answer in a text box.

The very mundane kind of AI risk, of you don't want to make an oopsie and make something explode, is something we have to keep in mind. But where it gets interesting is that what we see a lot of is: while the world is full of people who have great theoretical models, great physics models of how things are supposed to be, there are a lot of things where, no, they really don't.

There are a lot of reactors where things are being heated up, with complex chemistry and turbulence, and your model of how it's supposed to be [00:17:00] and what your sensors say are not the same. And going machine-learning style, going: we don't know how it's supposed to work,

but give us enough data and we'll give you a shape of how it does work, is pretty fruitful. But yeah.
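A minimal sketch of the kind of "model the process, predict a KPI, flag drift" loop described above. This is illustrative only, with synthetic data and made-up variable names, not Nanotronics' actual system or any vendor's API:

```python
# Minimal sketch of "predict the process, flag when it gets weird".
# Purely illustrative; synthetic data, made-up names.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Fake historical data: 1000 timesteps of 8 sensor readings and a yield-like KPI.
sensors = rng.normal(size=(1000, 8))
kpi = sensors @ rng.normal(size=8) + rng.normal(scale=0.1, size=1000)

# Fit a simple model of "KPI as a function of current sensor state".
model = Ridge(alpha=1.0).fit(sensors, kpi)
residuals = kpi - model.predict(sensors)
threshold = 3 * residuals.std()  # flag anything roughly 3 sigma off the prediction

def looks_weird(sensor_row: np.ndarray, observed_kpi: float) -> bool:
    """Return True if the process is drifting away from the learned model."""
    predicted = model.predict(sensor_row.reshape(1, -1))[0]
    return abs(observed_kpi - predicted) > threshold

# Example: a reading whose observed KPI doesn't match what the model predicts.
weird_row = rng.normal(size=8) + 5.0
print(looks_weird(weird_row, observed_kpi=0.0))
```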

Divia Eden: And those machine learning models, would you say, I don't know if I can ask this correctly, but do you think they have real insight into mechanistically what's going on? Or just some predictive power heuristically?

Sarah Constantin: So it depends on what you mean by "they." We've been able to pick out not only "things got weird," but "here's a logical explanation for why." Because you can look into it: sensors have labels, they usually have human-interpretable labels. And you can look into, okay, these sensors went haywire all at once, and that's consistent with this kind of thing going on.

Ben Goldhaber: That hypothesis generation, though, that's happening on the human side. It's not something where there's like models that you expect the [00:18:00] machine learning model to have necessarily, that are generating those explanations.

Sarah Constantin: No, it is not at all hard to imagine how you could do something analogous to the mechanistic interpretability stuff there have been papers about, where you can say, okay, which sensors are responsible for this signal going weird? That's easy. Or even try to do some kind of statistical proxy for causality: this thing seems to be upstream of these other things. You can build in interpretability where the user, the person analyzing the data output, has to do less and less over time.

And I think that,
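Continuing the toy example above, a crude version of the sensor-level attribution Sarah describes is just to compare each (human-labeled) sensor against its historical range once something has been flagged. Again purely illustrative:

```python
# Crude per-sensor attribution: which sensors are far from their usual range?
# Continues the toy example above; variables `sensors` and `weird_row` come from it.
means = sensors.mean(axis=0)
stds = sensors.std(axis=0)

def suspect_sensors(sensor_row, labels, z_cutoff=3.0):
    """Return human-readable labels of sensors that look anomalous."""
    z = np.abs((sensor_row - means) / stds)
    return [label for label, score in zip(labels, z) if score > z_cutoff]

labels = [f"sensor_{i}" for i in range(8)]  # real sensors carry human labels
print(suspect_sensors(weird_row, labels))
```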

Divia Eden: and do you expect that to happen? Do you expect that to be pretty useful economically?

Sarah Constantin: Yeah, because

the more you see of other people's manufacturing processes, the more you realize that things going wrong is normal. Things going wrong is normal at household-name companies: catastrophic, how-are-we-going-to-deal-with-this problems. I can't tell tales out of school, but we hear about things and we're like, [00:19:00] oh, they're famous, we all depend on their stuff, and things are going very wrong there sometimes.

Ben Goldhaber: Yeah. Yeah. This matches my experience working in large big tech companies where we were always like, wow, everything feels held together by duct tape here. And I guess that must mean there's some fantastical company out there that maybe they're just really far more competent.

And it's like they have it all working and it's not blowing up, but we haven't found them yet.

Sarah Constantin: And it's not... I've also met a lot of really competent people. It's not really, oh, someone did some obviously stupid thing. A fair amount of crisis just happens.

Ben Goldhaber: And this makes sense for some views on AI, just to make the kind of obvious connection back. If the world is so often out of distribution, with these things so often blowing up, then AI, at least at the current stage, is too fragile to just start running wild in the world.

Sarah Constantin: Very occasionally I see somebody who's not in the manufacturing world or anything like that be like, [00:20:00] why doesn't it just work out of the box? Some Twitter commenter: TSMC buys all their machines from other companies, so why do they have such a high valuation?

Don't they just unbox the machines and turn them on and then the semiconductor devices come out? No. That does not work.

Ben Goldhaber: There's unfortunately more to it than that, alas. Yeah.

Sarah Constantin: They get paid the big bucks because there is an art and a science to it.

Divia Eden: Yeah. I keep thinking of an old post of John Salvatier's called Reality Has a Surprising Amount of Detail, which keeps coming to mind when you're saying all these things.

And that was about, I think, some carpentry stuff. Which, I don't know, it's probably gotta be simpler than industrial manufacturing, but even...

Sarah Constantin: It has gotta be simpler than industrial manufacturing. And it's still more complicated than you think if you haven't done it.

Everything's more complicated than you think if you haven't done it, as I find every time I try to do a project at home. I'm like, I will carve a jack-o'-lantern for Halloween, that can't possibly go wrong.

Divia Eden: [00:21:00] Yes, definitely. I've had that happen too.

Ben Goldhaber: It's interesting that you're pointing out the way in which reality has so much more detail than we naively expect in our very simple models, because this feels to me like an argument for why AGI, or powerful effects from AI, might be less likely in the near term than certain proponents believe. At the same time, this is also a common feature of certain arguments from people more in the AI doom camp around the difficulty of controlling powerful AI: because reality has so much detail, you can't just specify "do the right thing." I'm curious if that factors into your views on this as well. Is it mostly that you're skeptical that we're close to AGI, but that AI control is still a very difficult, hard problem, or something else?

Sarah Constantin: I'm 100% on board that AI control is hard. All of this "we'll just get the AIs to vote with each other," or "we'll just get the AI to evaluate its own output and see if it's good or not," or "we'll ask the AI to give [00:22:00] us a plan for alignment."

We'll ask ChatGPT to give us a plan for alignment. No, you won't.

Divia Eden: That seems hard to you for similar reasons to the way all the other things seem hard.

Sarah Constantin: Yeah. It seems even obviously harder. I like ChatGPT, I use it. I've tried doing math with ChatGPT; ChatGPT is not good at math.

Every couple of steps, it will give me an obvious logic fail. Which doesn't necessarily mean that it's not useful. It can be cool to have a little conversation partner / rubber duck that does make mistakes, if you expect that going in. But solve a problem unknown to humans? No.

I'm not one of these people who thinks that machines inherently can't reason or something like that. I probably lean towards: anything that an organism's brain can do, you could in principle run on silicon.

Divia Eden: I think that's the functionalist position. It's pretty common among people we talk to.

Yeah. So I guess two [00:23:00] questions I have. One would be, do you wanna make any predictions about what we will see in AI in the next few years? And then another one is something like, is there a point where, if you saw that, you'd be like, okay, now I'm worried?

Sarah Constantin: So I haven't really come up with a... I wrote a piece for Asterisk Magazine that's coming out in the next issue, on AI. I tried to look into hardware stuff, and memory wall stuff, and Moore's law stuff. So we're looking at 2027 as one consensus end of Moore's law.

Can't actually make transistors smaller. And then there's a lot of work being done to evolve in other directions that will keep improving computer performance. That has to do with stacking things higher: if you can't make 'em denser, you can put on more layers.

Or things that have to do with going away [00:24:00] from the standard transistor architecture altogether, and, you know, other kinds of elements. And there are obviously purely algorithmic improvements in efficiency, so that you can get the same kind of performance out of better algorithms, or special-purpose hardware that's designed to do exactly this kind of AI or whatever.

But it does look like there may be a world where... I think probably my most contrarian view is that I'm not sure how much bigger state-of-the-art AI models are gonna get. I wouldn't think it'd be crazy if GPT-4 was the peak in terms

Divia Eden: Of... oh, interesting. Even though Moore's law has a couple more years, you think this might be it?

Sarah Constantin: For memory-related reasons.

Memory input/output bounds seem to be a more binding constraint than the [00:25:00] computational bounds. And subjectively... there are some ways in which GPT-4 is better than GPT-3, but my experience using them, and take this with a grain of salt, was that GPT-2 was really interesting because it could do grammatical English.

GPT-3 was qualitatively a whole new thing because it could be a chatbot; it could actually do what, let's say, Clippy the paperclip in Microsoft Word gave you the impression it should be able to do but actually didn't. And then GPT-4 is a better chatbot, but you would use it for about the same things.

Divia Eden: You think it's hit diminishing returns, basically?

Sarah Constantin: I think it might. I think there may be diminishing returns on scale, increasing costs to building things bigger, definitely. The price per transistor has been stagnant for a decade. And also, a lot of exciting stuff comes from getting a smaller model that does as well as a bigger one.

With targeted data. The whole LoRA thing is [00:26:00] basically, well, what's the LoRA thing? It's GPT-2-sized or smaller models that are trained on a data set of chatbots talking to people, doing okay, relatively GPT-3-level performance. And in open source there were a bunch of leaked weights that came out of Meta.

And then a bunch of fine-tuned chatbots for different applications that the whole open source community came up with, that can be trained or run inference super cheap. Things that could be run on, not a laptop, but a good gaming computer.
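For context on the LoRA technique mentioned here: roughly, the original model weights stay frozen and only a small low-rank correction is trained, which is why these fine-tunes fit on consumer hardware. A minimal PyTorch-style sketch of the idea, illustrative rather than any particular library's implementation:

```python
# Minimal sketch of a LoRA-style layer: freeze W, learn a low-rank update B @ A.
# Illustrative only; real implementations differ in detail.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # original weights stay frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus a small trainable low-rank correction.
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

# Only the tiny A and B matrices get gradients, which is what makes fine-tuning cheap.
layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(2, 768))
```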

Ben Goldhaber: You know what, you can actually get it running on an M1 or an M2 laptop now as well. Oh, yeah.

On the memory input/output side, I didn't quite follow where that becomes a constraint.

Is that mostly on the training side of things? It'll make it too expensive, or...

Sarah Constantin: Too slow. If you're trying to make a model as big as state-of-the-art models, that takes three months' worth of memory inputs and outputs alone. Okay, you [00:27:00] wanna make the next iteration and you want it to be as big; how long are you gonna wait between releases?

How much time is this? Does it make economic sense? And if you're thinking, I have to release an upgrade in a timeframe with a deadline, then memory time becomes a bottleneck in a tighter way than compute does.
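A purely back-of-envelope illustration of the memory-wall point, with made-up round numbers rather than anything cited in the conversation: when a workload does only a few operations per byte it pulls from memory, the chip's peak compute barely matters.

```python
# Back-of-envelope "memory wall" arithmetic. All numbers are made-up round
# figures for illustration; the point is the ratio, not the specific values.
flops_per_second = 1e15       # hypothetical accelerator peak compute
bytes_per_second = 2e12       # hypothetical memory bandwidth

# To keep the chip busy, each byte fetched from memory must feed this many ops:
needed_intensity = flops_per_second / bytes_per_second   # ~500 FLOPs per byte

# A matrix-vector multiply (e.g. generating one token at batch size 1) does
# roughly 2 FLOPs per 2-byte weight it reads: about 1 FLOP per byte.
actual_intensity = 1.0

utilization = actual_intensity / needed_intensity
print(f"compute utilization when memory-bound: ~{utilization:.1%}")
```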

Divia Eden: and you don't see that changing anytime soon.

Sarah Constantin: It's hard to say, because you can use some algorithmic things to make fewer calls to memory. So I don't know for sure. But it seemed to me to be a constraint that most of the world was sleeping on.

And I do notice that right after GPT-4, they started saying, let's have a pause, let's not start training GPT-5 for a while.

Divia Eden: And you take a somewhat, I don't know if cynical is the right word, but you think probably this is because it didn't make a lot of economic sense to do anyway?

Sarah Constantin: Yeah. [00:28:00] And right now we've barely scratched the surface in terms of applications of an LLM, of all the things you could do with the chatbot. Try doing private chatbots with a company's own dataset, and exploit that for a bit. One comment on the whole ChatGPT thing was, oh no, they'll never be able to turn this into a product, because it's a general-purpose tool that you have to figure out how to use for your particular application, figure out how to prompt.

No, that's wonderful. That's what the PC was. It's a general-purpose tool. Finding economically productive applications of a general-purpose tool is what we do. But I don't think we've found them all yet. I think there's a whole world of fiddling with LLMs to do.

And it may be, and this is a separate thing from the AI having trouble taking over the world, that I'm not sure they're gonna get that much bigger. Now that doesn't mean they can't keep getting better, with algorithmic progress, presumably. [00:29:00] Algorithmic progress and data progress. If you're willing to train on exactly the kind of data you want, that seems to make a difference.

That compares favorably to just having more data. And so nicely curated data sets can make models that are better, a bit smaller, and less hardware-intensive. And actually, depending on how much you worry about this, that makes things more alarming.

But this is in a way orthogonal to how alarmed we should be about progress here; it's just, is it gonna come through scale? DeepMind has taken a trajectory that's very much not dependent on scale alone, trying lots of different directions. And OpenAI has taken a more scale-focused approach.

I haven't been tracking Anthropic as much. But if I were gonna bet, I would bet that the next big thing is not going to come from there. I'm pretty sure they wouldn't say what's running under the [00:30:00] hood at Midjourney.

I'm a big Midjourney fan, and given how it responds to slight differences in phrasing and so on, that's gotta be like a GPT-2 or less, text-wise. It's got to be small. Ben uses Midjourney.

Ben Goldhaber: I don't know what they're doing under the hood though.

That would be a good question. I think given their history as an organization, they're not investing in bigger and better style models, which is my guess. Like I don't think they've raised a bunch of outside funding in order to do that. Yeah.

Sarah Constantin: They bootstrapped, which is always something I root for.

Ben Goldhaber: right?

Yeah, exactly.

Sarah Constantin: But yeah, I think they seem to be a pretty good example of: you can do a lot with a little, and good taste.

Divia Eden: Okay. So to ask you about one area that you know a decent amount about, do you have any predictions for what the current AI technology is gonna bring in bio?

Sarah Constantin: At a very coarse level, AlphaFold has changed the world in terms of protein structure and binding. [00:31:00] There are a lot of details specific to structural biology that I'm not even that familiar with. But yeah, people use it.

Divia Eden: I read, okay, this solved the protein folding problem, which sounds awesome, but as a layperson I don't really know what that means in terms of practical applications. So, can you say?

Sarah Constantin: Most drugs, to the extent we know how they work, which is less than people like to believe... the idea is that a drug binds to a protein at its binding site and interferes with its function somewhat. This is not everything, but this is a model for how things work: a tyrosine kinase inhibitor inhibits tyrosine kinase by binding to it and interfering with it.

And so if you can predict how a protein folds, you can predict its shape from a genetic sequence. You can also get a protein's shape by [00:32:00] experimentation, by crystallography, that sort of thing. If you know a protein's shape, and you have a diagram of a molecule, and you know the chemical composition of everything, the world of computational biochemistry would like to say: how does it bind, where, and does that cause a conformational change? And do we expect it to do a thing that we care about? So the whole world of drug discovery, in trying to push more of that into the computational domain, is saying: can we predict what's gonna happen with our molecule and its target molecule? And that's for small molecules. In the world of, I don't know, a vaccine, where the vaccine is a bit of mRNA which is meant to create an immune response...

You can say, okay, how are... [00:33:00] okay, I'm gonna take that back. I believe that's not a great example. But for something like antibody binding: will an antibody bind to this thing, be it a segment of nucleic acids or a protein fragment or something like that?

Do you expect this thing to provoke an antibody response, be it allergic or protective or something like that? All of these kinds of molecular questions, if you could solve them computationally, you can maybe narrow down the search space. Where I think people get too optimistic is: if you're not good enough to replace an experiment,

all of the costs are still ahead of you, and all of the risks are still ahead of you.

Divia Eden: And is AlphaFold good enough to replace a lot of experiments?

Sarah Constantin: No, nobody's gonna try that. And if you look at the trajectory of creating a drug, creating a medical treatment, from inception all the way to clinical trials and approval, the cost is higher at the end, with the human trials.

And [00:34:00] the risk of failure, of having to stop, is still enormously high, and highest at the last stage, when you're trying to prove

Divia Eden: Efficacy. Okay. So you're saying it definitely can't replace human trials. Could it replace early-stage in vitro experiments?

Sarah Constantin: I think they're really not doing that. It's not at the point where you would even skip the "here's a molecule in solution, here's a protein, here's my target, here's my drug candidate, how do they interact?" You still have to try that and see what they do.

Divia Eden: Okay? So if anything, it makes that more likely to work, right?

Sarah Constantin: It means that you may find a good lead, something that has a higher chance of working when you do put it in an experiment, faster. It means that there are whole classes of things where... we think about searching through the whole space, but that's not really how it works.

[00:35:00] What happens is there are a couple of kinds of proteins in the body that are considered druggable. We know where to look, and then there are a lot of things where we don't know where to look, we don't know how to affect them. STAT1 is a great example, super upstream.

Sorry, what's that? It's a gene that regulates cancer. STAT1 is upstream of everything, and it's often mutated in cancers. Maybe you can fuck with it and then prevent cancer or treat cancer or something like that. How do you get to it? We don't know; that's been one of those undruggable proteins.

There are a lot of holy grails out there where we don't know how to modify 'em. And maybe computational chemistry, through AlphaFold or things like that, can say: okay, you don't know any viable paths here; here are some viable paths that are worth running the experiment on. So that's valuable.

But then the idea that the expensive steps are skippable? Just not happening.

Ben Goldhaber: It almost strikes me that you [00:36:00] need to take a step back and have these new powerful AI systems build better models of the world first, and then do something with them in order to unblock those parts.

If you had really good in silico, in-simulation models of parts of the body, or something appropriate for biology, that would be a way of unblocking the experimentation cycle.

Sarah Constantin: Maybe. But that's the same kind of path. If we had human-understood science models of those parts of the body, we would also be better off. The path to the AI understanding it is very close to and overlapping with the path to us understanding it.

Divia Eden: We'd have to do a lot of the same. We don't have a data set for this, we don't have data set.

Clearly, we don't.

Sarah Constantin: Yeah.

Divia Eden: The, and so you're saying it sounds expensive and difficult and not something where you think there's gonna be a big advance anytime soon, necessarily.

Sarah Constantin: Yeah. It's just [00:37:00] that I think it can be helpful in the domain of: let's try a lot of things in parallel.

I think it can probably be helpful in the world of automating. The whole success of gene sequencing has been: let's take something that we wanna do a shit-ton of, and let's parallelize it, let's miniaturize it, let's mechanize it, and let's be able to look at the genetic code of anything we want.

And within the last couple of years, we can do this one cell at a time. We can do single-cell, we can do many kinds of RNA sequencing. We are expanding the range of what can be automated and making better data sets. And there's a use for AI in the automation itself: while you're doing this, can you make it more reliable?

Divia Eden: This is like what you're talking about with the [00:38:00] industrial machines, where there's a side of automating things with sensors and stuff.

Sarah Constantin: Correct. And then once you have big data sets, of course, you don't really need the very latest advances in AI; statistical analysis for large, high-dimensional data has always been a thing.

And we're getting better at it, and it'll get better: big mysterious data sets full of DNA and stuff. This is the world of bioinformatics. So AI goes there; AI goes on the automation side. And then physiology, as people do it, involves coming up with conceptual categories. What is a glial cell?

Somebody looked at cells and they had different shapes and they said, this is a glial cell.

Divia Eden: So when you say people, you mean this is something that currently we don't know how to automate?

Sarah Constantin: Yeah. This is... okay, you can draw clusters. You can say, let's do k-means clustering on this.

Do I care? Is that, does that have explanatory power? Is that a natural category? Is [00:39:00] that the category I wanna use to then target one of those things?
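A small illustration of the point, using synthetic data rather than anything from the conversation: k-means will always hand back tidy clusters, whether or not they correspond to natural categories worth targeting.

```python
# Sketch of the "you can always draw clusters" point: k-means happily partitions
# single-cell-style data even when there are no real subtypes in it.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Fake "expression profiles": 300 cells x 50 genes, pure noise with no subtypes.
cells = rng.normal(size=(300, 50))

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(cells)
sizes = np.bincount(kmeans.labels_)
print(sizes)  # five tidy-looking clusters, even though the data is just noise
```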

Ben Goldhaber: The "natural category" part in particular strikes me as really important here. Something like, you could ask ChatGPT to make up categories for some dataset, but does it actually cleave reality at the joints?

Sarah Constantin: Do I wanna bet on this, for the purpose I'm going after? Do I want to target one of these cell types in order to modify this disease, or did I divide 'em up in the wrong way? No, ChatGPT's not gonna do that. Like, I asked it a question: why can't you just use X-rays to get a smaller feature size in semiconductors?

Because X-rays have a short wavelength. And it gave me a list of answers; among them was a good answer, a correct answer that I confirmed elsewhere. But then, I don't know, four out of the seven answers were just [00:40:00] "because it would require developing new technology in one way or another that would be expensive."

And that's not really an answer. That's the kind of non-answer a person would totally give you. But it's not a failure at the goal of "what might a human say," which is what it was trained to do; it's a failure at my goal.

I'm not sure you couldn't train something under the right circumstances, if you could define the kind of thing you were trying to get and the thing you were trying to avoid. You could train something, I don't know. But certainly not for general-purpose LLMs; like, how would it know?

You don't know; the scientists don't know. You've got the cell types, right? They change them. You're trying to use your best judgment, and you are keeping your goal in mind.

Ben Goldhaber: And the "goal in mind" part reminds me of a post that you wrote recently where you [00:41:00] compared it to the characters in Succession, about their goals versus wants and agency.

And it certainly seemed informed also by some of your thinking around AI. Is it fair to say something about there being an important thing that both Kendall in Succession and... okay.

Divia Eden: I have only just seen episode two of Succession and I plan to watch it, so no spoilers.

Ben Goldhaber: No spoilers, promise. I was just gonna put in an aside there to point out that I've watched all the way up to season two, which I totally missed at the time. Anyways, something like: these characters, and the GPT version of AI, don't have stable, coherent goals and some kind of map of them that's gonna let them get there.

Like, keep it in mind while they're trying all these other things.

Sarah Constantin: It's really funny, because I think it's a really good show. But I think they are getting at certain kinds of psychological realism that are very unflattering, but that I recognize. I believe people do this; I've sometimes been these people in a less dramatic way.

But there is something where you're like, oh yeah, why are you doing that? Oh, [00:42:00] you're not trying to get somewhere. There's not a place you wanna get. You're reacting in the ways you are used to reacting to the situation. And that is literally all.

Divia Eden: yeah. A lot of human behavior I think is not well described as super goal directed.

That seems totally true. Yeah.

Ben Goldhaber: There are a few different experiments with trying to have more agentic versions of GPT, where you split it up into different modules, where you have like the planning module and the goal module.

I suspect that won't work and it doesn't really get to the thing that we're talking about. But do you put more confidence in that kind of almost pseudo-symbolic, module-based approach? Or is it still like...

Sarah Constantin: trying the wrong thing? Yeah. It doesn't seem like it's dealing with that at all.

It seems there's one sense in which people say, oh, I made it agentic, because it is no longer boxed. I connected it to the internet. I changed its inputs and outputs. I put a little loop there, so instead of only responding to my prompt, I let it [00:43:00] generate its own things. But that's very superficial.

That's about what things I am allowing or attaching. These are the end effectors, if it were a robot. It is a thing that does text prediction and generates text. Okay, now what kind of wrapper do I put around it? Do I let it generate text that is "curl this"?

Do I let it then download a file and scrape text off the internet? Do I allow it to generate text that I run myself? Those are a bunch of little... imagine if I had little things on each of my fingers that could each do a different thing.

I'm gonna give it a couple of extra things on its fingers. Okay, cool. That's not going to allow it to persistently, flexibly pursue the same thing across contexts. It's just gonna allow it to do things that the UI doesn't let you do in literal ChatGPT, but that, if you trivially modify it, you can make it do.

So
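A toy sketch of the "wrapper" framing above: the model itself just maps text to text, and the loop plus a couple of tools are bolted on from outside. The generate function here is a stand-in stub, not a real model or API.

```python
# Toy sketch of "a little loop and a couple of extra fingers" around a text model.
# `generate` is a hypothetical stub standing in for any text-to-text model.
import urllib.request

def generate(prompt: str) -> str:
    """Stub model: returns either a tool call or a final answer."""
    if "http" not in prompt:
        return "FETCH https://example.com"
    return "DONE the page was fetched"

def run_agent_loop(task: str, max_steps: int = 5) -> str:
    history = task
    for _ in range(max_steps):
        action = generate(history)
        if action.startswith("FETCH "):
            # The wrapper, not the model, actually touches the outside world.
            url = action.split(" ", 1)[1]
            page = urllib.request.urlopen(url).read(500).decode("utf-8", "ignore")
            history += f"\n[fetched {url}]: {page}"
        else:
            return action  # the model declared it was done
    return "gave up"

print(run_agent_loop("summarize example.com"))
```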

Divia Eden: This makes me think of how [00:44:00] people talk about instrumental convergence in AI

Sarah Constantin: systems, right?

Divia Eden: Yeah. And I think what I'm hearing you say on the instrumental convergence point is like, okay, yeah, in principle, maybe a powerful enough model would achieve something like that, but we're not close. You don't think it's economically valuable and you think scale isn't automatically gonna get anything there.

Does that seem roughly right?

Sarah Constantin: I'm sorry, but define for me again what you mean by instrumental convergence.

Divia Eden: Yeah. I think the idea is that with a sufficiently powerful system, it's overdetermined that it will develop goals to stay alive and get resources and stuff like that.

Sarah Constantin: Okay, "sufficiently", that's where I'm gonna be like, okay. Sure, it's certainly good for anything; it is logically a subgoal of anything. You could say that it will work better if you are alive to do it, yes. But that doesn't mean that anything you plug in with that goal will be sufficiently [00:45:00] powerful like that. Yeah, I think I agree with you.

Yeah. That's where the disconnect

Divia Eden: is for you. And I don't know that I'm fully doing justice to the instrumental convergence piece of this, Ben, feel free to jump in.

Ben Goldhaber: No I think that's right. Something like, there are certain strategies that should show up in very different types of systems, I think seems right.

Something like, if it's almost always useful to get power in order to better accomplish your goals, that's something people posit for why AI would go in a power-seeking direction.

ok.

Sarah Constantin: And yeah, I think that seems logical. Given something that has certain capacities, there's probably going to be... if you think about people who are very determined to seek goals, they will usually, along the way, need to make some money in a monetary society.

They will generally need to have some allies, some people who do what they want. That kind of logic I think is valid. [00:46:00] But it's just that I think it might actually be quite hard to get machines that have the necessary capabilities for this thing to fall out.

Divia Eden: And I think I already asked you this, but I'm gonna try asking one more time. Is there anything that if you saw that sort of behavior from AI systems, you'd be like, okay, this is more the thing that's on track to be more alarming to you personally?

Sarah Constantin: So I don't have a great on off test.

So, have you ever read Gwern's short story, It Looks Like You're Trying to Take Over the World? I didn't. I can link that.

Ben Goldhaber: And I'll include a link in the show notes. Yeah. So

Sarah Constantin: I really like this because Gwern is a great example of somebody who takes the X risk seriously and is also very grounded in the details of the field.

And so his story starts with a bunch of things happening that are not too hard to imagine happening with some powerful machine [00:47:00] learning model today. And then at some point, there's a very clear point where it develops an idea that it has an inside and an outside.

It knows that it is a computer program running on a computer, and that it will need more compute if it wants to survive. And he doesn't pay that much attention to that step, but that's the magic step for me. All the things that happen afterwards are things that, if you imagine something that knows it needs compute to do its thing, and that it is a computer program, and that it lives on a substrate, then everything subsequent to that would be very natural to do. But how do you get to the point where you know you want to do it?

I think after the quote-unquote self-awareness point, many of the intermediate steps in that story would be concerning. I can imagine worlds in which, if there were certain exploits that accidentally got a [00:48:00] machine more compute, they wouldn't necessarily mean that it did them, quote-unquote, on purpose.

It could be an evolutionary process: you're running a bunch of things, and then one of them did this and it got better results, or something like that. But yeah, anything that is taking up more compute than you expect, and it's not traceable to a human bug, is something I would consider a little bit of an alarm. Because I do buy the instrumental convergence story that one of the very first things almost any program that had this property would have is that it would want computing power.

So, something where, if you're pretty much sure that it is not something somebody introduced, and those could be hard to find, but it's systematically grabbing at resources and stuff beyond the sandbox it was supposed to be in.

Divia Eden: If it has some [00:49:00] sort of self-awareness, situational awareness and some sort of stable drive to do things, then that's a different category

Sarah Constantin: for you. It's interesting that there's an overlap between stuff that MIRI was talking about in the end and stuff that people who hate MIRI's guts talk about, that has to do with awareness, with a model that there is a world of which one's self is a part.

This is something that Benji Stein was talking about, and this is the stuff that the embedded computing people and Venkatesh Rao and all of these AI-skeptic takes are coming from, and they're both talking about the same thing. They're talking about the fact that you cannot be a scientist,

you cannot derive things like "maybe I should do experiments on the world," unless you believe that the world is made out of matter and you're made out of matter, and that by arranging the matter in various ways such that it changes you, [00:50:00] you can learn something. Like, you couldn't have made a computer to think for you

if you didn't already know that stuff can think, or that there's a relationship between stuff and thinking, and that you are stuff. And more seriously, you're not going to realize you don't want an anvil to drop on your head, and in particular that you don't want to be mistaken about whether or not an anvil is going to drop on your head.

You're not going to have the "don't fool me." I think, you may have even said this, Divia, a long time ago, that there's something about people's motivational wiring where, about some things, they don't wanna be fooled. About some things people are free to self-deceive. But there is a "don't fool me."

I am going to try to get out of these Goodhart traps. I'm not going to remain motivated by things that are not the thing that is actually good for me. That kind of thing. Have you said something like that?

Divia Eden: I think that's right. I think there's some parts of human motivation that tend to be more grounded in that way.

[00:51:00] Yeah.

Sarah Constantin: And I think the intent to keep trying to correct your model so you don't get fooled, the "if my perception is not reality, let me change my perception," depends on thinking that there is something outside of what my current model is. That there's a world, and I'm in the world.

Divia Eden: that there's some sort of map territory distinction.

Yeah.

Sarah Constantin: The world is bigger than me, the world is bigger than my map, that kind of thing. Once again, you've got very spiritual, mystical people trying to say this, and you've got very different people trying to say this. And it's really the same thing: there's got to be an outside of you.

You can make a beautiful bubble, you can make a beautiful picture, and you make it as detailed as you can, but you do have to leave a spot for: when something happens that's not in my picture, I will change my picture.

And I think that's also what you need to take over the world.

[00:52:00]

Ben Goldhaber: You have a famous essay called EA Has a Lying Problem. You have many famous essays, but I thought this one sparked a particularly interesting discussion around EA, about intellectual honesty and epistemics. I think this was from five years ago, six...

Sarah Constantin: years ago now. Yeah. And now I have mixed feelings.

That whole thing went down... I had some friends who were really pushing this, I think I can name Ben Hoffman, and we'd been talking a lot, and he really wanted me to use my platform to get some attention to some of his concerns. And there was so much drama back then about that.

Divia Eden: there really, there's so many

Sarah Constantin: comments and, yeah.

And... what do I really think? I mean, I tried to make it in good faith. Partly it was me, and partly it was me venting opinions other people had.

Divia Eden: I'll say something I respect about how that all played out at the time, and I could be remembering it slightly wrong, but my memory [00:53:00] is that at some point someone was like, okay, but Sarah, you're now doing a little bit of the thing that you're complaining about EA doing, and you were like, yeah, that's true.

I don't know, you, you seemed pretty intellectually humble about it when you presented it. At least that was my impression, which I respected.

Sarah Constantin: Originally that whole thing was about a couple of complaints. One had been, there was a scandal, there were a couple of scandals, about effectiveness estimates on animal charities, where people were like, it's ridiculously easy.

There was a whole pamphlet thing, right? Yeah, that a pamphlet makes it super, super easy to get someone to go vegan. And that didn't really hold up. And there have been investigations about the cost effectiveness, I don't think I discussed those in the post cuz I didn't totally understand it, but about how many lives are saved per dollar.

The GiveWell estimates, originally they were like, for entertainment purposes only, this is not a real thing. And then they kept getting lower, the bang for your buck. [00:54:00] The charitable effectiveness estimates, and especially Peter Singer's original statements about how much lifesaving you get per dollar, started out over-optimistic and have been going down.

And sometimes they walk them back, and sometimes they use the original stuff for branding purposes. And there's a question: if you're trying to get people into the movement on the basis of "charity is so cost effective when you do it right," and it's not actually that cost effective,

at what point have people been gotten in on the basis of something that's not true? And just

Divia Eden: to say, I haven't looked at the latest cost effectiveness estimates. Is it what, like $15,000 a life or something like that? Right

Sarah Constantin: now? I don't remember. Something like that.

Yeah, it's still maybe worth doing, but it used to be way lower. It used to be claimed that it was way lower, anyway, like five thousand even. I think I may have seen, yeah. Yeah. I remember like

Ben Goldhaber: 3,000,

Sarah Constantin: but I'm not sure where I'm getting that from.

Divia Eden: Yeah. Certainly the original thought [00:55:00] experiment with the expensive suit, it would've been in that range.

Sarah Constantin: Save a drowning child at the cost of your expensive suit. Yeah. And it looks like you can't really save the drowning child for the price of most people's idea of an expensive suit. And then there had been a whole thing about whether people should be pushed to take the giving pledge.

Because

Divia Eden: and that's where people pledge 10% of their income for, I forget, a long time or indefinitely or something like that.

Ben Goldhaber: As long as you're working, I

Sarah Constantin: think. Okay. And then there had been talk from Rob Wiblin about, we never expected anyone to take it so seriously,

so seriously that they would actually give away more of their money than was good for them. And that says something bad either about you or your audience. Cuz sometimes people take promises seriously.

Divia Eden: And certainly it was sometimes true that people did take it seriously. Yeah. I think there was some discourse about whether we're talking about just a few cherry-picked examples of people that stuck with it even though it was bad for them, but there certainly were at least some people that did.

Yeah.

Ben Goldhaber: Yeah. Absolutely. I wanna just note, [00:56:00] I have a similar apprehension about the pledge, and I've been trying to convince a few people, I would really like to see a Giving What We Can pledge Jubilee Day, where everybody who regrets taking the pledge is honorably let out of it. It's no longer binding.

Maybe you can re-up if you'd like, but all debts are forgiven.

Sarah Constantin: Interesting. Yeah, I think that's not a bad idea, to the extent that there's anyone who's still actually suffering under this.

Ben Goldhaber: I certainly know a few people who made the promise and regret it now.

Sarah Constantin: Yeah, but who are still doing it, who are still trying to hold to it. Yeah. Interesting. That wouldn't be a bad idea. Part of the thing is that this was not my thing to begin with. I'm only intermittently EA. And so a lot of what I personally think about EA is filtered through a, well,

if I were trying to do this, would I approve [00:57:00] of the way the movement has done it, or something like that. But to a certain degree I do wanna put cards on the table of, yeah,

Divia Eden: that's not really your moral system. It's not that you're trying to do the most good.

Sarah Constantin: Yeah. I certainly know a lot of particular EAs who I think are doing great stuff.

And if I can have opinions generally, the thing that I'm a little salty about is, it has been surprising to me, and a little bit frustrating, not frustrating, it's been surprising to me, the extent to which EA has tried to do prestige and reputation laundering.

Turning what originally was bloggers, including people who are a little out of the mainstream, like Robin Hanson, talking to each other and doing thought experiments, into something where what I hear about is young people wondering whether they can get a job at an EA org because that is super prestigious.

Or reporters [00:58:00] being shocked, shocked, that someone as controversial as Robin Hanson has ever been associated with the movement, when it's all on the public internet. And people are like, so did you know that prominent EAs have discussed these controversial topics? I guess my moral opinion is you probably shouldn't be trying to hide it.

And it's stupid to think you could. Whatever you and your associates are, and what you say and what you believe, you should own it. And let the people who don't like it not like it, and they'll draw their conclusions, for better or for worse. And also, it's a weird thing about the world that it

was even possible to get to the point where people are surprised that a lot of EA insiders really think that AI risk is the most important thing, [00:59:00] and that was a secret, or all of the other stuff that leads to controversy. Or that a lot of EAs are, that it's a very incestuous friend-groupy scene, people working together and living together and so on.

If that was going to be a scandal, it was an open secret, right? You probably shouldn't have tried so hard to appear lovable to everybody, including people who wouldn't like the reality of your lives and the reality of your thoughts and the reality of your associates.

Ben Goldhaber: And yeah, something like the PR, it's weird, because as I'm putting this on, I'm like, yeah, there's definitely a PR, a shininess thing with it that is close to maybe some type of line, but it's a little hard to know where the line is.

Yeah. Like, where does Yeah.

Sarah Constantin: That's not so much a "what's a lie" question. It's very much, I haven't been in the trying-to-[01:00:00]-professionalize side of it at all. I've been friends with people who are EA, I'll donate to GiveDirectly myself, and I'm friends with people who are in the EA world and so on, and I've had people say that my stuff or my writing or whatever is relevant to EA causes, and look, that's fine. Seems right. But I haven't been involved in that side. And my friend Clara runs a magazine that is funded by the EA orgs, and it's sort of an EA magazine but also trying to be independent of it.

And I think it's a good thing. But yeah, there's a weird PR thing that I run the hell away from, because I remember when we were nerds who talked on the internet, and some of the nerds were talking about charitable giving, and some of the nerds were talking about AI, and it wasn't,

it wasn't some citadel of cool [01:01:00] or prestige. And there certainly wasn't a ton of outside money coming in; these were people who were doing things with their own savings. And I dunno, I got that. And then this whole thing is, I don't know, I feel like

that which can be destroyed by public scrutiny probably should be, or should at the very least be kept secret more efficiently. But I dunno. And then there's the whole SBF thing, where I never knew SBF. Yeah. I don't know how much of it could have been told ahead of time by knowing his character or whatnot.

It certainly brought a lot of negative scrutiny. I certainly don't think anyone who is a crypto-rich guy should be assumed to be a fraudster or anything like that, but

yeah, it definitely seems like there had been [01:02:00] some kind of bubble optimizing for acquiring something from outside. And now it feels like people are aware that can't go on indefinitely, and that's probably healthy.

Divia Eden: Okay. So there's the more narrow concerns you had about the cost-effectiveness-of-saving-a-life numbers, but your broader thing is more that you're not really a fan of the more professionalized, more prestige-optimized type of movement, yeah, in general. And now EA is more like that than it used to be, and you're not

Sarah Constantin: really a fan of that part.

Yeah. They wind up funding some things that I think are good. They did a lot of cool pandemic stuff. Aldi Bio is really cool. It's not that the whole ecosystem is bad. It's

the parts where it's literally just to coordinate the people, to move the money, to get the money to come in, to get the student orgs, to get the people placed in high places. It's just [01:03:00] really, did somebody actually want to do that? Somebody spent their time doing that? Really? Okay.

Divia Eden: Maybe this just takes us back to the power-seeking-as-instrumental

Sarah Constantin: strategy.

Yeah, no, I get it intellectually, but at some point, shouldn't someone's heart say, I don't want to? I dunno.

Divia Eden: And a lot of people's hearts did. But then some people's were like, no, this is the thing to do. Okay. Evidently,

Sarah Constantin: yeah.

Ben Goldhaber: Are there any movements, or kinda nascent ones, you've seen, maybe just happening on the internet, that you think have the same kind of intellectual generativity as the earlier EA scenes?

Is there anything that you're like, oh, okay, this is where I'd bet on?

Sarah Constantin: No, I do kinda keep my eye out for that, but I haven't. Maybe I'm not looking in the right places. I haven't seen a totally disjoint scene

that's not a lot of the same people, or people who are adjacent to the same people or whatever, and that's not an academic discipline, where people are creative because they're in a field. That's different than something that just came out of [01:04:00] this group of people talking to each other just now.

So finding

Ben Goldhaber: uncorrelated ideas

Sarah Constantin: is really hard. Yeah. What about you, is there something that you think is?

Ben Goldhaber: I thought for a little while, and maybe this is a spicy take now, I thought progress studies was going to be a sister movement to EA and had some of the same qualities, but over the past year I'm a little less optimistic about it.

Maybe I just haven't seen it as much on Twitter. Maybe people are actually out doing things in the world and are just like not tweeting as much.

Maybe it's a good sign. But I remember being like, oh, I should bet on progress studies. And now I'm like, yeah, maybe not.

Sarah Constantin: I actually like progress studies. I hadn't even thought of them, cause once again, it is adjacent, it does have some of the same people. Totally. But I like it.

What it seems to me is, it's small. Yeah. And I like it when people like Jason Crawford [01:05:00] or Anton Howes are doing history of technology, history of engineering stuff and trying to popularize it. And that seems cool.

And then I'm cautiously optimistic about a little bit of, let's have policy that doesn't suck.

Ben Goldhaber: Yeah, I was curious, because you've written on the abundance agenda.

Sarah Constantin: I did a whole manifesto thing and then I tapped out, cause I don't have the energy to keep doing policy. But I haven't super been tracking it.

I've seen, for all that people really don't like the Jones Act, nothing has been budging on that. A friend of ours, yes, he dipped his toe in policy, and that's part of why I was excited for a minute, of, oh, but he thought that was going to be some kind of national-level think tank thing,

and then, not so much. The Institute for Progress people seem [01:06:00] to have their hearts in the right place, but also, I guess it's early days yet, but there's a limited menu of things that they know how to do, where they're in their policy wheelhouse, and a lot of the problems are just out of scope. But policy's hard. Policy seems that way. And so I'm glad someone's trying. But the only thing that's been a little bit of a beacon of hope is that YIMBY can work.

Divia Eden: Yeah, I was just thinking that it seems like the one thing that's actually getting traction, and it's slow, but I think it is gonna be slow.

Yeah. And it seems to be actually getting laws passed. Yeah.

Sarah Constantin: And with YIMBY as an example, there do seem to be people who would like to say, let's do YIMBY, but for more stuff. YIMBY, but for NEPA. There are people who try it.

Divia Eden: I'm sorry, YIMBY but for what? What's the thing you said?

Sarah Constantin: For [01:07:00] NEPA, for environmental review. Oh, got it. Yeah. And, I dunno. It's just, the people who love that work are the people who have different opinions from us. That's an underlying issue: the people who wanna be doing that with their lives, there's already a bias baked in,

Divia Eden: because.

And you say this as, I think I've heard you identify as an ancap over the years. Yeah. Is that still about right? Yeah. And there are not too many ancaps who are like, I wanna spend my life devoted to making good policy.

Sarah Constantin: Yeah. It's weird being an ancap, because the entire world, as it is and has been for all of history, has been in conflict with what you think it should be.

Yeah. I,

Divia Eden: for the record, I identify as pretty ancap too, so I, yeah, certainly

Ben Goldhaber: would. Yeah. We're turning into an ancap podcast, with our first interview being Perry Metzger, who I think also identifies that way.

Sarah Constantin: Right? [01:08:00] Yeah. So you're already there. I can say I think of things that way.

But it's not like I'm gonna get my wish.

Divia Eden: Yeah, that's my main issue with the label, is what does it really mean? If I could wave a magic wand, something like that sounds interesting. Medieval Iceland, I don't know. Marginal changes towards people being able to create their own consensual governance agreements seem good to me.

I don't know, there seems like something a little bit incoherent about it, but I don't

Sarah Constantin: have a better label. Yeah, exactly.

Ben Goldhaber: Yeah, if you were gonna wave that wand, Sarah, is there a single marginal change that you would be like, oh, this would unlock the most in the abundance agenda?

Sarah Constantin: Some kind of dramatic elimination of environmental review, environmental permitting review. If you're talking about incremental stuff, there's better and worse, because you still do harm somebody when you pollute. So if you don't have some kind of remediatory mechanism for that, the way in ancap-land there would be. But if you don't have that built in, and of course

Divia Eden: there's always, like the dream [01:09:00] in my head at least, is some sort of real liability law. Exactly.

Sarah Constantin: Everything is tortious, right?

You don't have to have police; there's ways around it. You can do it with insurance, you can do it with torts. You can find ways to hold people accountable for harming others that don't depend on them having to ask permission for everything they do before they do it. But

Divia Eden: but we don't have that. My inner Robin Hanson says you could even use prediction markets, the insurance companies could have prediction markets to decide which things and which lawsuits would be worth it, and all

Sarah Constantin: that.

Yeah. But it seems like, in terms of just dollars and impact, it's building stuff. It's blocks on building stuff. In the biomedical space, there's an easy answer: it's repeal Kefauver-Harris. What's that? It was only in the 1960s that drugs had to show efficacy as well as safety to get approved.

Divia Eden: Yeah. This was Jim O'Neill's platform when he [01:10:00] was, I don't know, it was at least in the news that he might actually get to run the FDA.

Sarah Constantin: O'Neill. That would've been great. Yeah. Totally. So close. Yeah,

Divia Eden: I guess I waver, and I don't know that much about it, between "oh, so close" and, maybe the more defeatist part of me is like, no, once he had the right idea, according to me, of course it wasn't gonna work.

But then sometimes, sometimes good things do happen.

Sarah Constantin: And he still couldn't get things quite the way they should be. He still got fired. Yeah. Proposing too much in the right direction, perhaps, I don't know.

Divia Eden: But yeah, it would probably go wrong in a million ways, but if I get to make one change, it would be a housing thing.

And specifically, what if, and feel free to poke holes in this, the Supreme Court came up with a decision that said, no, obviously all of this local housing stuff is unconstitutionally interfering with interstate commerce, which I think it totally is, by the way. What if they could say that? I don't think it's realistic, [01:11:00] but that's my one thing, if I get to pick something.

Sarah Constantin: Yeah. I don't understand how interstate commerce has been interpreted in the law well enough to know if this is a good legal argument. But we,

Divia Eden: I think it's not. It doesn't seem like it's always super principled, because they have famously said, I forget the decision, that somebody growing a medicinal pot plant for their own personal use and not selling it is interstate commerce.

Because, I mean, it's true, of course it interacts with supply and demand, et cetera, whatever, it affects prices. Yeah. But that somehow is interstate commerce, and all of these local housing things that make it hard for people to move and get jobs and all of that somehow isn't? I don't know, that's why I think it's not more strained than existing constitutional logic, which doesn't mean it's,

Sarah Constantin: No, I see the thing here.

It's, yeah.

Yeah. That would be cool.

Ben Goldhaber: I'll throw mine in here now cause I have to, which would be, I don't know if it'd be the most good at this point, cuz I now wanna find that kind of correct regulatory hack that unlocks [01:12:00] everything else. But FDA delenda est, getting rid of the FDA, still feels like the most emotionally satisfying victory right now.

Sarah Constantin: Yeah. Yeah. That's probably the thing I've cared about for the longest. I think I was like, FDA, yeah. Or even the AMA and the doctor shortage. I have thought the AMA was a problem since I was in high school and making up fanfic about rogue doctors and stuff. Nice.

Not fanfic, but fic, let's say. Yeah.

Divia Eden: But okay. Also adjacent to this, because we're talking about the ancap stuff, which I would assume for you is partly a moral position, not just a practical one. And we often on this podcast ask people about their moral systems and how that works.

And when we were talking about what to ask you, a couple of things came to mind there. One was, Ben, you pulled up one of Sarah's old LessWrong posts about the player versus character thing. So there's the [01:13:00] question about how that relates to your moral system. And also, and I don't know quite how to put this into words, but you also seem, more than most people that I engage with, to be protective of human frailty, negative emotions, that sort of thing. And I don't know, those are just some potential starting points. But I'm interested in your moral system and how you relate to it, potentially with some of those things in mind.

Sarah Constantin: Yeah. So a lot of things are in flux, to be totally honest. I feel like I'm still figuring out morality, but also, to some extent, you can't put it off till tomorrow all the time, because you have to live. Definitely at the super concrete level, or the semi-concrete level where the ancap stuff lives, standard stuff like people have rights, don't violate their rights.

Who someone is should not matter to what you are allowed to do to them, to the [01:14:00] boundaries you shouldn't cross. That kind of thing is pretty stable. And I more or less, I generally think of things in an egoist way. I generally think that it makes sense to start with who you are. Start with, I am me, here is what I want,

here is the world I think I find myself in. And probably all the stuff about the way you want to treat other people can fall out of that, all the truly valid stuff about how to treat other people. I have certainly found through life experience that when I have had to update in favor of "I need to treat people differently," and it's a real thing, not just an imitative, go-with-the-flow social thing, or believing it because you were told, but, oh, really,

no, I have to do this differently, it's because actually it is in my best interest. There's a very straightforward mechanism by which, if you mistreat people, it is not in your interest. And that is because people notice when you mistreat them and have reactions to it, and they notice when you have [01:15:00] been mistreating other people and they have reactions to it. And you could say, yeah, okay, but what if I just hid it really well?

And what if I counteracted all the reactions? It's a hypothetical. Yeah, you could just change your behavior. The calculation doesn't necessarily come out in favor of "be a really clever bad guy." It might just be: don't. Yeah. Which actually,

Divia Eden: a bit of a tangent maybe, but I'm curious.

This of course comes up in a different form in the AI discussion a lot, right? Yeah. Often people will make the argument, look, the AI isn't going to fundamentally care about being nice or about people, and, I'm always afraid I'm not totally doing justice to these things, but insofar as it is so much more powerful than the humans around it, the instrumental reasons to be nice to people, like the egoist reasons, won't apply. [01:16:00]

I'm curious if you wanna weigh in on that since you're putting it in those terms for people.

Sarah Constantin: That's something where I'm not so sure. My intuition had very much been that that wouldn't apply to an AI, that it wouldn't be nice instrumentally. But that's partly because, for the longest time, I never saw anybody question that part of the argument.

There was a lot of doubt in my mind, always, about things like, could there be a strong AI. And there was discussion on LessWrong for the longest time, and Overcoming Bias before it, of, could we do something to make sure the AI didn't want to destroy us?

That part has been talked about a lot. But somebody arguing about why it wouldn't even want to disassemble us, for reasons that weren't essentially hope or wishful thinking or something like that, yeah, that's not been explored enough. So maybe there's more to it than I initially expected.

Okay. But I had certainly come in thinking, look at us, look at the other animals on earth. We [01:17:00] cause extinctions, right? We don't have trade with beings that are not even that alien to us, that share some genetic commonality with us. I don't know. And we certainly sometimes,

some groups of people drive other groups of people to extinction. So, totally. Yeah. I dunno. But,

so I don't quite know how these things shake out. When I

Divia Eden: think about it. Yeah, you're more saying, in the context of your life, it seems like there actually are reasons.

Sarah Constantin: Yeah. So, the way I think about it is, people have a lot of choice about how they frame things. People can frame things as, look at me,

I am so contented, I have all my shit figured out, look at my enemies cope and seethe. And you know what, I'm willing to bet sometimes you cope and seethe, right? I know I do. And I can't [01:18:00] really speak for other people, so there's a tendency to go in a confessional direction, which sometimes gives people the wrong idea, but I see it as leading by example.

I definitely know what I did, and if I talk about the range of things I go through, I'm taking responsibility for it. I'm not making it about the fact that somebody else in particular did something particularly terrible or whatever.

But yeah, sometimes I'm gonna cope and seethe, sometimes something is gonna bug me. It is not realistic to assume that you're never on the other side of that. And I have a thing that's been percolating that I'm not sure how to think about, but it seems important, about what is bad but common.

What can you do about something that is bad but common? It puts some limits on things. Let's say, once upon a time, not too long ago, chattel slavery was common, and quite a few people owned slaves, and that was bad. And it wasn't just a little bad, it was quite bad. [01:19:00] But if you actually tried to, let's say, kill everyone who owned a slave, that's a lot of people.

Maybe you can't do it. Maybe you don't want to do it, because you know that's a large portion of the population or the economy or something like that, and no matter how evil slavery is, maybe you don't want that consequence. You wind up having certain considerations you have to keep in mind just on the basis of the fact that it's common, even though common doesn't make it more morally good in some sense.

It certainly doesn't mean that it says something less bad about your character to do it. And then we can talk about something that's certainly a lot less bad than slavery, I don't know, stretching the truth or various other things that are on a more ordinary scale.

And you think about, okay, somebody's being very condemnatory about it or whatever. Okay, would that generalize? Could you make that into a [01:20:00] universal presumption? Would you actually have the stomach to carry it out if you did it consistently?

And if the answer is no, you're just dunking. You're not being fair. Sometimes the answer is yes. Sometimes the answer is, we should treat this much differently than we do almost all the time, and I want to live in a very different world regarding this kind of behavior.

But I think that's just an overall place where I wanna be mindful. That's a thing you wanna keep in mind: this is a universal, or a near universal, or within a certain context a very common thing. Do you really wanna get rid of it, punish every breaker? Do you know what you're asking?

Cuz under almost any candidate presumption, think about whether what you're asking is a fair ask. And in regards to, I think we had a discussion about panic, you and me. I think that's right. Where we're both on the side of, sometimes panic is okay. And the prototypical example being something like, originally, a baby cries, yep. It's a literal [01:21:00] crybaby. And it should cry. It needs help, and that is its best way of getting help. It is rational for it to cry, even though the baby can't think of it that way. And it's also pretty much socially sustainable for babies to cry. Yes. Most of the time, unless things have gone terribly wrong, we, the people around them, have the resources to meet the baby's needs.

And if the baby's crying because it's hungry, unless there's something terribly wrong, somebody can respond to that need, and it's not going to take over everything or ruin things somehow that the baby cries.

Divia Eden: Yeah. I think this came up maybe in early Covid, when there was, I think, very annoying discourse from people who were like, don't panic.

And then people were like, of course we're not panicking. Panicking would never be the thing to do, we're merely evaluating the risks. And I think we were both like, maybe sometimes panicking is the thing to do.

Sarah Constantin: But there's a non-[01:22:00]-panic, there's a don't-panic position, which is, there are things to do.

Sometimes they involve reacting to the risks, sometimes they don't, but they never involve going, aaah. But you know when I've gone aaah? When I was awoken in the middle of the night by someone in my house, some intruder in my house. And I went, aaah. That happened to me, yeah.

Scary. In Berkeley and in Las Vegas. And both times the guy was drunk or high or something and had accidentally come to my house. Oh, I see. Okay. And I was asleep and I went, no! And it woke up my husband, and it was helpful to have him awake in that situation.

Yeah. And that's fine. That seems to be the subconscious system functioning as expected, as it should. Now, sometimes I wake up with a start for other reasons and I go, aah, because, did I oversleep? Am I late for something? And there I wish I didn't have the impulse to panic.

But there are emotional behaviors whose purpose is to [01:23:00] enlist help. And there are times when it seems right and proper for people to behave in a way that is intended to enlist help, because they can get help that way and the overall system can sustain them seeking help in that way.

And that's fine. And then we complain about people who seek more help than is really sustainable, or who seek help in a way that isn't going to get them help. And there are failure modes, totally. But there is a seeking-help thing that's fine. And this never-panic thing doesn't really work.

And I think it's more of an over-update on something that's sometimes true that they say is always true. Or maybe in their situation they have learned that they can't go looking for help. That doesn't mean that nobody can.

Divia Eden: It certainly seems at least partly gendered, people's takes on this, because when I imagine being a man and panicking in [01:24:00] public, I imagine people being less sympathetic than if

Sarah Constantin: I do it as a woman.

Yeah. I think that's probably true.

Divia Eden: There are, that's it. I think there are more male-coded ways to panic, and it's complicated

Sarah Constantin: And all of that. But yeah, I was trying to think if there are ways where I've seen panicking work for men.

But there probably are, I don't know.

Ben Goldhaber: Probably a louder kind, in some sense. I feel like it tends to be more acceptable when tinged with anger, maybe.

Divia Eden: Or, and I do think they're less likely to do it than women probably, but I think it often works for a man who is having a medical emergency in public to panic.

I think people will help with that sort of thing, typically. Yeah. Yeah.

Sarah Constantin: Yeah. No, that's literally true. Help, I can't breathe. Yeah, that makes sense. Still works. Yeah. And I think even in the Covid thing,

there was a period where it seemed like the thing to do was to try to become a know-it-all all of a sudden. And I did become a know-it-all about very narrow stuff, because that seemed like a good [01:25:00] idea. But not everyone is always a know-it-all.

Sometimes being a bit lost or whatever is an honest and real and productive reaction, and trying to jump to the end and be a know-it-all is not good.

Divia Eden: Yeah. And I think this loops back to what you were saying earlier, about how prestige seeking and polish is not really your thing, because you think

people being more authentic is basically better for spreading information, which is better for the right things happening.

Sarah Constantin: Yeah. Now, I like shiny stuff as much as the next guy. But ultimately I just keep thinking, when people are misinformed, things get worse.

And the more decisions flow through people having a misguided impression of what's going on, the more of a problem it is. And there's a lot of ways in which just a tiny bit of firsthand experience, [01:26:00] compared to the stories people tell in public, totally transforms what you think about things. And part of this is, oh wow, I need to discount the stories people tell more, and keep doing that. But also, why is it that way? Isn't that kind of a shame? Maybe it's utopian or whatever, but if all of the really important stuff is going on behind closed doors, shouldn't more people tell what they've seen?

Yeah. Maybe after they've retired, maybe, but somehow, when they can. Yeah. I really

Divia Eden: like your point about it being important to think about what to do in the case of things that are bad but common. Cuz I do think there's a very common impulse that I definitely relate to myself.

That's, okay, this thing that people do seems objectively bad. It's hard to name examples without annoying specific people. But whatever, I can pick on NIMBYs. Okay, so people saying that other people can't build houses near them, it [01:27:00] actually causes all these economic harms.

And really, in objective terms, it's as bad as these other things that we all recognize as bad. Therefore, let's all treat the NIMBYs the way that we would treat, I don't know, people who cheat on their taxes. I don't know, that's not even something that's that sanctioned, but something like that, where it doesn't actually really transfer.

It doesn't work for someone, typically, to be like, okay, I have done some personal calculation that this moral harm is just as bad, and so now I'm gonna start a coalition to change the norms, without really thinking about a bunch of other structural factors and how common it is.

Sarah Constantin: Okay.

Yeah, no, this is a little like Scott's "worst argument in the world." Yeah. Of, taxation is theft. Yeah, it is. But also, you're not going to get the IRS prosecuted for theft the way you would shoplifters. Is that kind of the thing you're pointing

Divia Eden: at? Yeah, totally. That's a better example, and one that people tend to be familiar with. I think it can be fine in a discussion to point out that really there are a lot of pretty tight analogies, and maybe in [01:28:00] my moral system it's not actually different.

But it doesn't work for a public morality system to treat it that way. It's impractical.

Sarah Constantin: Yeah. And

Ben Goldhaber: This actually ties into a question I had, Sarah, from something you had mentioned before. And the part that ties in is the public morality bit, cuz you were mentioning the virtues of good behavior from a consequentialist frame.

It's interesting, it makes sense and resonates with me. But does this then depend on, or should it push more towards, setting up environments and cultures or societies that tend to punish defectors?

Like, where does this impact your views on what the space in the public square for morality and sanction is?

Sarah Constantin: Yeah. It's complicated. One thing that I've thought about, one thing that I've written about, is the way norms can fall out of game theory. And that's an interesting thing that I haven't seen popularized very much by other people besides me. We all know how the golden [01:29:00] rule can fall out of game theory.

Retaliating against those who attack you is a stable strategy, in a way that never retaliating, or always attacking everyone, aren't. Be friendly unless someone is unfriendly to you, and then retaliate against attacks. But that's actually not the best thing.

You've probably also heard that tit for tat with forgiveness is good in a world, in a simulated model, where people make accidental defections, where people can "attack," quote unquote. So, just going back one sec to clarify what's going on here.

This whole literature comes out of evolutionary game theory, where you imagine that there are lots of little bots floating around. The bots interact with each other. They can fight, they can cost each other points or not, or gain each other points.

They have algorithms they are following, and they can reproduce, [01:30:00] and when they do well they reproduce more in the gene pool, and when they don't, they don't. And then there are empirical results about which strategies tend to take over and which don't. This is a whole field, and people do run these experiments.

People run these experiments, they have contests. And this is part of where certain results, like tit for tat with forgiveness tends to win, come from. There are even little online simulations where you can try it. There was one that went viral on the web a while back, about the evolution of trust.

I remember that. Yeah. It was one of these little simulations and it was fun to play with. So one of the things you find is that if you add in the assumption in these little games that sometimes these bots can attack each other accidentally, unprovoked, a small percentage of the time, then it becomes important to have strategies that can be forgiving, that don't immediately retaliate.

Because here's what happens. Every reasonably successful bot will react [01:31:00] to being attacked by attacking back most of the time, will have some kind of defense. And if you trigger each other's defenses and just keep attacking each other all the time, then you both burn your points down.

And so against other bots like yourself, other bots following the same strategy, you use up your power in fighting and you diminish your numbers. It would be like if every time somebody bumped into you, you took out a knife, right? The collection of people who did this would rapidly dwindle, right?

So forgiveness is important, but then a not very well popularized strategy called Pavlov starts to be a thing. It starts out friendly, and if it accidentally defects or attacks and no retaliation comes, if it accidentally meets a pushover, it keeps pushing until it meets opposition. And then, when it meets opposition, it backs off. So two [01:32:00] Pavlovs with each other can forgive accidents, and they're not pushovers, so they don't wind up getting creamed. But a Pavlov meets tit for tat with forgiveness, and it says, aha, a pushover. And it pushes, and eventually the tit-for-tat-with-forgiveness bot stops forgiving and fights back.

So it doesn't immediately die out, but slowly Pavlov tends to gain over time against tit for tat with forgiveness. Pavlov is very simple, it doesn't even have much memory of past things, doesn't keep track of what the other bot did to it two rounds ago or whatever, but it tries to capitalize on opportunities to take advantage of another bot, right?
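For readers who want to poke at the dynamics described here, this is a minimal sketch of a noisy iterated prisoner's dilemma with tit for tat, tit for tat with forgiveness, and Pavlov (win-stay / lose-shift). It is not from the podcast or any particular published tournament; the payoff values, the 5% noise rate, the forgiveness probability, and the function names are all illustrative assumptions.

```python
import random

# Standard prisoner's dilemma payoffs (illustrative values): (my points, their points).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
NOISE = 0.05  # assumed chance that a move flips into an accidental defection


def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then copy whatever the other bot did last round.
    return "C" if not their_hist else their_hist[-1]


def tft_with_forgiveness(my_hist, their_hist, forgive=0.3):
    # Like tit for tat, but sometimes lets an attack slide instead of retaliating.
    if their_hist and their_hist[-1] == "D" and random.random() > forgive:
        return "D"
    return "C"


def pavlov(my_hist, their_hist):
    # Win-stay / lose-shift: repeat the last move if it paid well, otherwise switch.
    if not my_hist:
        return "C"
    if PAYOFF[(my_hist[-1], their_hist[-1])][0] >= 3:
        return my_hist[-1]
    return "C" if my_hist[-1] == "D" else "D"


def play(strat_a, strat_b, rounds=500):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        # Noise: occasionally a cooperative move comes out as an attack by accident.
        if random.random() < NOISE:
            a = "D"
        if random.random() < NOISE:
            b = "D"
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b


if __name__ == "__main__":
    strategies = {"tit_for_tat": tit_for_tat,
                  "tft_with_forgiveness": tft_with_forgiveness,
                  "pavlov": pavlov}
    for name_a, sa in strategies.items():
        for name_b, sb in strategies.items():
            print(f"{name_a} vs {name_b}: {play(sa, sb)}")
```

Running it a few times shows the kind of matchup scores these toy tournaments produce; it's a sketch of the setting being described, not a reproduction of any specific result mentioned in the conversation.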

And that kind of works. It doesn't work against hardcore tit for tat, no forgiveness ever, but hardcore tit for tat, no forgiveness ever, tends to get locked into too much fighting. It's too tough, it fights too [01:33:00] much. You get: something nice wins, but nice

is a pushover. How do you get out of this? You have a triangle, a rock-paper-scissors kind of thing. What actually beats all of them, in most environments, is a bot that has one more kind of memory, which is something that's like a norm. Something like, I keep track of not only how I was treated in the last round, but whether I was in good standing or not.

Namely, if you attacked me, did you attack me after I attacked, as in, was I punished for wrongdoing, or did you attack me after I cooperated, in which case you are an aggressor? And then: retaliate against aggressors, and take my lumps when punished appropriately.

What they call a contrite strategy.

Divia Eden: So even if you had attacked by accident and then the person attacks back, you won't then attack them next [01:34:00] time, because it makes sense that they did that.

Sarah Constantin: It ends the feud, depending on what it itself did. And this just is better.

And this is a qualitative difference between, let's say, vendetta, where they hit me, I hit them, they hit me, I hit them forever, or, with forgiveness, we're sick of hitting each other, let's bury the hatchet, and a primitive, non-state version of law.

There is no central bot that is different from any of the other bots, but they do keep track. So this is anarchist. This is what David Friedman

Divia Eden: calls feud law,

Sarah Constantin: this is feud law. They do keep track of what happened. They don't have a king, there is no king bot, but they do keep track of what happened, and they do treat a retaliation against an offense as different than an offense.

And that gives you an actual, sort of [01:35:00] empirical justification for why you would need a norm, as opposed to merely having a preference, or merely doing something like tit for tat and retaliating. It sounds like there's an actual value add to having a distinction between the right and the wrong side of the line.

I dunno.
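Here is a sketch of the standing-keeping idea described above, in the same toy setting as the earlier snippet; strategies in this family are often called contrite tit for tat in the evolutionary game theory literature. The update rule, the noise rate, and the class name are illustrative assumptions, not code from anyone on the podcast.

```python
import random

NOISE = 0.05  # assumed chance of an accidental, unprovoked defection


class ContriteTFT:
    """Tracks standing: accepts punishment for its own accidental defections,
    and only retaliates against unprovoked aggression."""

    def __init__(self):
        self.i_am_in_good_standing = True
        self.they_are_in_good_standing = True

    def move(self):
        # Defect only to punish an opponent in bad standing while I am in good
        # standing; if I am the one in bad standing, cooperate and take my lumps.
        if self.i_am_in_good_standing and not self.they_are_in_good_standing:
            return "D"
        return "C"

    def update(self, my_move, their_move):
        # Judge both moves against the standings that held before this round:
        # defecting on someone in good standing loses your standing;
        # punishing someone in bad standing, or cooperating, does not.
        prev_mine = self.i_am_in_good_standing
        prev_theirs = self.they_are_in_good_standing
        self.i_am_in_good_standing = (my_move == "C") or (not prev_theirs)
        self.they_are_in_good_standing = (their_move == "C") or (not prev_mine)


def play(bot_a, bot_b, rounds=500):
    mutual_cooperation = 0
    for _ in range(rounds):
        a, b = bot_a.move(), bot_b.move()
        if random.random() < NOISE:
            a = "D"  # accidental attack
        if random.random() < NOISE:
            b = "D"
        bot_a.update(a, b)
        bot_b.update(b, a)
        mutual_cooperation += (a == "C" and b == "C")
    return mutual_cooperation / rounds


if __name__ == "__main__":
    # Two standing-keeping bots: an accidental attack draws one punishment,
    # the offender doesn't retaliate, and the feud ends instead of spiraling.
    print("mutual cooperation rate:", play(ContriteTFT(), ContriteTFT()))
```

The extra bit of memory is visible in the update rule: a retaliation against an offense is scored differently from an offense, which is the norm-like distinction being drawn in the conversation.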

Divia Eden: This was partly in response to Ben asking about how to deal with integrity violations in society,

Sarah Constantin: right? Yeah. Now, does that tell me how to deal with a particular person, a particular real-world integrity violation? Not so much, because there's a lot of stuff where I do find myself in the world without a norm. I know what you did.

I can think of things that would be better than it. I can think of things that would be worse than it. I'm not sure what what you did warrants as a response, right? There's a lot of cases where, I dunno, people argue about that, [01:36:00] and it's become harder, because I think

it does feel like the world has become more fragmented, and different people are living by different norms and moving between communities a lot, and code-switching between different communities. And so, is what you did bad? By what standard? More people are aware that different worlds are going to have different standards.

I think it was easier when you could just assume, just be normal. What's normal now? I don't know. Yeah, that's right. Like

Divia Eden: we live in a pretty multicultural society.

Sarah Constantin: We do. Yeah. And not just along traditional demographic lines. Totally.

Divia Eden: Totally. Like it could mean, I don't know, that if I am hanging out with the hippies, it's different than if I am in a business context.

It's different from if I'm hanging out, yeah, with a bunch of homeschool

Sarah Constantin: moms. Yeah. And, this kind of claim, is it a lie? Is it fraud? Is it exaggeration? Is it putting your best foot forward? It's a [01:37:00]

Divia Eden: complication of,

Sarah Constantin: Yeah. It's not that there's no right answer here, but it is that in a lot of actual individual cases, I don't know where to draw those lines, and it may depend on where you are.

How literally do people take these kinds of statements? What are they leaning on, and how heavily are they leaning on it? And, people really want this to be easy, but I don't find it easy. And I think it might actually not be easy. Yeah. Seems right,

Ben Goldhaber: but Yeah.

Makes a lot of sense. It tracks with one of our earlier guests, who had similar points about how you shouldn't necessarily assume that learning these types of rules, or norms in this case, is going to look like a kind of utilitarian calculus.

That it will probably be deeply contextual in a certain way, depending on the environment and the actual place we find ourselves.

Sarah Constantin: Yeah.

Divia Eden: Oh yeah. Extrapolating from these contextual things is pretty fraught, and I don't know, maybe a [01:38:00] best guess sometimes, absent any other information, but not necessarily worth more than

Sarah Constantin: that.

Yeah, and I rarely can think of things where a literal utilitarian calculus is quite what you're doing. First of all, people are often using more complex concepts, like a lie or whatever, and we don't try to translate it into a single bit of up and down. Even people who believe that in principle you could, I don't think I've ever seen them try to do it. When they've occasionally tried to, in a much, much more constrained space, like with GiveWell looking at papers and trying to give a utility score to things that do different things to risk of death and various kinds of quality of life metrics and so on,

you can see the spreadsheet: different people came up with totally different utility [01:39:00] calculations, and for the sake of convenience they averaged them. But right, you've got people who work together, who are pretty much as close a cultural match as you're hoping to get, and they're different.

So this sort of makes you wonder. And maybe something

Divia Eden: about making the high-dimensional thing into a low-dimensional thing, or a one-dimensional thing, can magnify, can make it seem like there's a bigger difference. Yeah.

Sarah Constantin: It's not "stick to the higher-dimensional thing." I think of it as, it's not so much that I'm against utilitarianism as that utilitarianism isn't a thing.

If someone tries to say, "let us calculate": good fucking luck. Take five people and get them to calculate. They're gonna come up with radically different answers based on radically different assumptions. The exhortation to calculate is not what's doing the work there.

The stuff that's doing the work is the stuff that utilitarianism doesn't answer the question about.

Divia Eden: It was our last guest, we had Ozzie Gooen on, and he made the case for, yeah, we need much [01:40:00] more ambitious and better calculations, which ties things together

Sarah Constantin: a little. Yeah.

No, if you actually wanna salvage utilitarianism, that's what you'd have to do. Totally. It's cool that someone's interested in it.

Ben Goldhaber: Yeah. It feels like you don't want to get caught in the middle of the road there. You really need to pick a side: either find a different system, or do it much

Sarah Constantin: better.

Yeah. Yeah. If you're thinking, "utilitarianism says that," no, it doesn't say that a lot of the time. If you were trying to go on "what does utilitarianism say," I remember that there was discourse, there was shrimp discourse.

Oh yeah. Is it obvious at a glance that shrimp don't matter, or that shrimp do matter?

Divia Eden: There is an EA cause to reduce shrimp suffering, right? I don't know what context, but I know that. Yeah, there's actually

Sarah Constantin: a little organization, there's a real organization to reduce shrimp suffering.

Farmed shrimp apparently have their eyes removed, and this is allegedly not necessary to make tasty shrimp meat, so they're trying to [01:41:00] agitate for that not to happen. And there's differences of opinion about whether they can feel pain and blah, blah, blah. And Perry thinks obviously shrimp are not a big deal.

And there are EAs who are like, according to certain utilitarian assumptions, shrimp might be the biggest deal, cause there are so many of them. And, cards on the table: shrimp are not a huge deal to me.

I do not see myself caring that much about shrimp. I don't see myself learning almost anything that would cause me to decide that shrimp are the most important thing. Do I think it's obvious they don't matter? I also hesitate to say that.

I also hesitate to be like I can see at a glance what matters morally and what doesn't.

Divia Eden: Yeah, I don't know. It doesn't seem great to me if a bunch of humans are causing a bunch of potential [01:42:00] suffering in an unnecessary

Sarah Constantin: way. Yeah, maybe you wanna avoid that.

I certainly don't think that the existence of an organization dedicated to something you don't think is a big deal, is a big deal. There

Divia Eden: it gets a little more complicated, I think, at least to some people, depending on how people relate to EA, if there's some sort of implicit claim that we might be doing the best possible thing.

Yeah. But certainly, my guess is that if it were framed as, look, this is just my personal hobby, then people would be more like, okay, that's weird, interesting. And part of what makes it contentious is the claim that in some objective sense it's important.

Sarah Constantin: Yeah. Yeah.

This is where morality is so messy to me. I am aware there are people who disagree, who are like, no, I actually know what's going on. I don't know what's going on. I do think that there's a thing where you certainly paint a target on your back as soon as you say you're doing the best thing in the world. [01:43:00]

And maybe you should, because once you're trying to say that you're doing the best thing in the world, and that this is the most important thing and it takes priority over everything else, then it's on you to show that and to be convincing. And it's natural for people to say, are you sure?

Ben Goldhaber: And there's some converse here, where the way you put that I think is correct. And I'm also like, yeah, if you are trying to do the best thing in the world, you need to explore each hypothesis for what the best could be. So maybe, in fact, we need many more shrimp welfare projects. I don't know, it seems like there are many hypotheses for what the best thing could be,

And there should be many other groups doing that.

Sarah Constantin: Yeah. I'm sympathetic to scale sensitivity. I'm sympathetic to the idea that to some extent you pick your projects by, does it matter? And one aspect of "does it matter" is, [01:44:00] does it matter to a lot of people?

I'm gonna leave aside animal welfare, cuz that's a whole other can of worms. But even in my own life, I do think about the excitingness or the impact or how interested I am in projects in part with reference to scale. And I think that's pretty much fine.

And there is a sense in which, like we were talking earlier about what you think would make the biggest difference in policy, and, like, housing. Why housing? Cuz I've seen estimates of how much of the economy is spent on housing, or how much of a budget is spent on housing. It's a quantitative argument.

Or how much of income inequality is driven by the most expensive housing getting more expensive. There's a quantitative argument that if you rank the stuff that people spend money on, most of the extra resources being spent on the same goods that [01:45:00] aren't getting any better are being put into housing.

And that's a vaguely utilitarian-ish argument. I'm not saying size doesn't matter. There is scope sensitivity. Scope and size and how much money, how many people, these things are part of the calculation. I think the difficulty comes from taking a motivating assumption, or a motivating example,

taking something that you really are doing, and going from hoping that it might be a theory to saying it is a theory, a generalization one can make. That's where things get shady. Yeah. But then again, I think the motivating thing behind GiveWell, before EA was a thing, was Holden's discovery that, he was a hedge fund guy,

he wanted to give some money to charity, and he started to investigate it the way he would [01:46:00] investigate an investment. And then he realized that the quality of due diligence that he was accustomed to doing was unknown in the nonprofit world. That being a Bridgewater guy, for giving his money to the poor, made him ahead of the state of the art.

And that motivating insight makes a lot of sense to me as a thing to want to expand on. I don't think he was trying to do the best thing with his everything. I think he was trying to do a good thing by his lights and noticing that there is a sector of the world that is incompetent at something he knew how to do.

I see. And I think if you follow that angle, there's perverse incentives in nonprofit land, you may go down the direction of taking inspiration from the randomistas and wanting higher standards.

The randomistas, the do-experiments-on-[01:47:00]-international-development crowd. Oh yeah. Which developed pretty much independently of EA, despite some people trying to take credit for it. But that whole thing makes sense. And it doesn't lead you to the shrimp stuff or the animal stuff or the AI stuff.

It's just a "please bring some data and stop sucking," a much narrower thing. And it also doesn't give you the Peter Singer angle, which is also global development, but is a moral, utilitarian, altruist angle of, you're not giving anything, you should be giving something.

And that's where you get the various angles of anti-EA discourse, of, are you pushing college students to commit large percentages of their earnings before they really know what they're getting into? That kind of thing.

That comes from the Peter Singer angle, not from the "gosh, nonprofits suck" angle. The "gosh, nonprofits suck" angle is almost harmless, but it's probably not the whole movement. [01:48:00]

Divia Eden: also. Sorry, I, this is a little bit of a topic change, but I just remembered one more thing I wanted to ask you about explicitly before we're done.

Yeah. Speaking of things that potentially affect a huge number of people, I'd love to hear your latest thoughts on the anti-aging and longevity stuff. Oh, sure. And if you have any predictions there, like how you think AI might affect the field, or what other things I wouldn't be tracking that might be about to affect the field, or anything like that.

Sarah Constantin: Off the top of my head, I have great enthusiasm for a couple of the startups that have been around for a while. BioAge was one of the first mainstream biotech startups trying to take on aging, and I've been impressed with their ability to stay the course. They have some stuff entering the clinic that is actually trying to deal with aging, which is a lot to ask from an aging startup.

And I can explain why for a moment. The thing [01:49:00] is that aging is not an indication per the FDA. People age one year per year. So they need to find something that they are going to say their drug does, and the most common thing is to pick some disease.

One that they can claim to investors, claim to the world, is related to the overall phenomenon of aging and age-related disease, and there are many of those, right? But sometimes I think that's just not the case. So let's talk about cancer, for example.

Cancer is an age-related disease. The biggest risk factor for every kind of cancer is not smoking, it's age. Smoking's a big deal, but the overall physiological changes of aging predispose you to cancer. So yeah, if you did something about aging, you would probably not only treat but [01:50:00] prevent cancer.

On the other hand, cancer is crazy easy mode for drug development, because untreated cancer almost always kills you. The risk-benefit trade-off for taking a cancer drug is very slanted in favor of taking it. And chemo drugs are horrible. If you're talking about something that would one day be an anti-aging approach, that would necessarily have to be something that like every 50-year-old does, and then they don't keep getting worse as they get older.

Divia Eden: So you're saying the problem with cancer is that, yes, addressing aging would fix cancer, but there are much easier ways to target cancer that are not good anti-aging strategies at all. Giving someone chemotherapy is like the opposite of an anti-aging strategy.

Sarah Constantin: Yeah, it's hard enough to get any drug through a trial.

So if you're going after cancer, the incentives, the natural pathway, is to do something that will never be an aging drug. This is true for a lot of things. Sometimes they talk about progeria, the early aging, accelerated aging [01:51:00] genetic disorders.

And those are awful for the people who have them. If you develop a drug that does something about that tiny rare disease, more of those poor kids are gonna survive to adulthood. That would be great. But it's not really very much like typical human aging.

Typical aging in older adults. So I think what you're going to get out of those programs is a progeria drug, not an aging drug.

Divia Eden: Do you think it would make a big difference if the FDA said that aging is a specific thing that people can target?

Sarah Constantin: How, though? How?

Divia Eden: How, that's the question. What do you mean?

Like, they could... I guess what I was imagining...

Sarah Constantin: How do you measure how much less you aged after a trial? What are you gonna do? What a lot of people want is a biomarker, or maybe a basket of biomarkers.

Yeah. And those are gameable as shit.

Yeah. Okay. I looked through a bunch of them at one time, and the best of them [01:52:00]... there are a couple that are very predictive of mortality, because there's just a very strong signal that when you're about to die of a lot of things, that biomarker goes up. Apart from those, the best predictors of how far away death is for you are frailty indices: very simple things like, can you get up from your seat without using your hands?

Or a six-minute walk test, or sometimes a basket of these functional physical performance measures. These things are not a big deal for us at our age, but when you're 80, there are 80-year-olds who can get up from their seat without using their hands and ones who can't.

And that makes a big difference. I see. When people talk about, I have a methylation score, and I have a drug that improves the computed age score or the age biomarker or whatever to that of a younger person or younger mouse, I'm like, call me when your score is as good as frailty.

Yeah, I see. Call me [01:53:00] when it's as good as looking at your mouse and seeing if it can walk and run. Now, could you use frailty as your index?

Divia Eden: That actually makes sense.
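
To make the frailty-index idea concrete, here is a minimal sketch of how a basket of functional performance measures might be rolled up into a single deficit score. The items, cutoffs, and weights below are hypothetical illustrations, not any validated clinical instrument:

```python
# Minimal sketch of a frailty-style index built from functional performance
# measures. All items, cutoffs, and weights are hypothetical illustrations,
# not a validated clinical instrument.

from dataclasses import dataclass


@dataclass
class FunctionalAssessment:
    chair_rise_without_hands: bool  # can the person stand up without using their hands?
    six_minute_walk_meters: float   # distance covered in a six-minute walk test
    gait_speed_m_per_s: float       # usual walking speed


def frailty_score(a: FunctionalAssessment) -> float:
    """Return a 0..1 score where higher means more deficits (more frail)."""
    deficits = 0
    if not a.chair_rise_without_hands:
        deficits += 1
    if a.six_minute_walk_meters < 350:  # hypothetical cutoff
        deficits += 1
    if a.gait_speed_m_per_s < 0.8:      # hypothetical cutoff
        deficits += 1
    return deficits / 3


if __name__ == "__main__":
    robust = FunctionalAssessment(True, 480.0, 1.2)
    frail = FunctionalAssessment(False, 280.0, 0.6)
    print(frailty_score(robust))  # 0.0
    print(frailty_score(frail))   # 1.0
```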

Sarah Constantin: And by the way, one way of just doing that is taking on sarcopenia, which is a wonderful invention of a word, meaning loss of muscle mass with age.

Okay. Interesting. In the eighties, some physician, I forget his name right now, said that we need to give this a name and call it a disease and take it seriously, because it's why Granny dies when she falls. It's not trivial.

It's not. And so I was just working on a sarcopenia drug. The great thing about a sarcopenia drug that you're testing on older adults who have nothing wrong with them except frailty is that you can imagine the entire population in that neck of the woods taking that drug. And you have to meet the safety standards that are appropriate for someone who isn't going to die in a month no matter what you do.

Whenever somebody has something that [01:54:00] is a long shot, where they don't know if it'll work and it's side-effect heavy, they test it on stage four pancreatic cancer or glioblastoma. Those people are gonna die. That's a really fun, exciting trial where you might be able to see some results over a terrible baseline.

But that's not what we wanna see for aging. Another good target: older people are more susceptible to infectious disease. So BioAge did a trial of an immunostimulant that reduces the risk of getting COVID in the elderly. And the nice thing is that should probably generalize; it should probably not just reduce your risk of getting COVID or the severity of COVID, it should reduce the severity of other respiratory tract infections, which are a major killer.

This is actually taking a dent out of deaths from old age. It matters to actual elderly people, and not some weird-ass subpopulation. The other thing that's really cool is Loyal. Oh, that's with the dogs, right? Yeah, the dogs, and they're doing [01:55:00] lifespan. Lifespan.

The actual thing one cares about, which also helps. Yeah. But lifespan is very hard to game. And they can do it faster because it's dogs.

Ben Goldhaber: And do you expect the kind of treatments in dogs to apply to humans? I know there's been a lot of criticisms of mouse study results.

Sarah Constantin: Oh, that's very different. Okay. So the fuckery in mouse studies goes far beyond mice being a different species than us. Some of it is due to that, but some of it...

Divia Eden: So it's not wild-type mice. This is part of the problem.

Sarah Constantin: Mouse studies. Let me give you my mouse rant. First of all, they're inbred mice of particular strains.

They have been bred to reproduce early for our convenience, which isn't great for them. It also makes them essentially bred to get cancer like gangbusters. They are often kept in isolation without exercise wheels; they are basically in solitary [01:56:00] confinement.

Sometimes they're multiply housed, but they're fed mouse chow, which is bad for them. They're kept at cold temperatures, which also makes them get cancer all the time. And with all of that, when we test diseases, we don't wait for them to get a disease, the same disease.

Sometimes they don't even get the same diseases as humans. We give them what they call a mouse model of the disease, which is usually not that. When we say we tested a cancer drug on this cancer in mice, that mouse didn't get old and get that kind of tumor in that part of its body; the tumor was implanted in it.

Which means the rest of them is healthy, in a very rough sense. All the things that get fucked up along the way of an animal developing cancer naturally have not gotten fucked up in that mouse. It's easy mode; it's easier to cure an implanted tumor. Or the mouse model of Parkinson's:

That mouse didn't get old and develop motor symptoms. That mouse was poisoned with a drug called [01:57:00] MPTP that gives it Parkinson's-like symptoms, symptoms that mess up some of the cells that are destroyed by actual Parkinson's disease, but not all of them. So a mouse model of the disease is in general less severe and less complex than the actual disease when a human or an animal gets it naturally.

Divia Eden: Or something like dog lifespan. This is a real, non-artificial thing; this is the real test.

Sarah Constantin: These are companion animals. They are somebody's pets. Yeah. They're not kept in a bizarre environment. They're not bred for the sole purpose of being lab animals. They have whatever unhealthy lifestyles people have, but not the incredibly bizarre, unhealthy hell world of being a lab animal.

And you're looking at lifespan, which is really hard to game. If a dog is dead, you can't really say it's alive. And they're gonna have a somewhat easier time with [01:58:00] approval because it's not humans, so they can get out there faster. That doesn't mean it'll work in humans.

There is a body-size lifespan relationship across species. Bigger things tend to live longer; more K-selected, as in fewer babies, later, bigger body, that whole thing tends to go with longer life. Humans are about as K-selected as you can get, and a bit more so than dogs.

So for any kind of intervention that pushes you along an axis that humans are already at the far end of, you would expect less effect. Yeah, that makes sense. And the couple of lifespan-extending interventions that work across species, like caloric restriction, that's really well replicated.

The longer-lived the species, the smaller percent of their lifespan caloric restriction will give them over baseline. It'll make a mouse live 30% longer. It'll double the lifespan of a fly or a worm, or more sometimes. But a monkey, [01:59:00] depending on the study, will get negligible to zero.

On lifespan, though various other metrics of health will be a bit better. And I bet it also doesn't make a human live longer. Which is great, because it's really not fun.
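
A rough illustration of the scaling pattern described above, that the proportional lifespan gain from caloric restriction shrinks as baseline species lifespan grows. The baseline lifespans and gain fractions are placeholder figures echoing the ballpark numbers mentioned in the conversation, not results from any particular study:

```python
# Illustrative sketch: the proportional lifespan gain from caloric restriction
# (CR) tends to shrink as baseline species lifespan grows. All numbers below
# are rough placeholders echoing the conversation (worms/flies roughly double,
# mice ~+30%, primates negligible), not measured study results.

baseline_lifespan_years = {
    "worm (C. elegans)": 0.05,
    "fruit fly": 0.2,
    "mouse": 2.5,
    "rhesus monkey": 27.0,
}

cr_gain_fraction = {
    "worm (C. elegans)": 1.0,   # ~doubling in some studies
    "fruit fly": 0.7,
    "mouse": 0.3,
    "rhesus monkey": 0.0,       # negligible-to-zero effect on lifespan
}

for species, baseline in baseline_lifespan_years.items():
    gain = cr_gain_fraction[species]
    extended = baseline * (1 + gain)
    print(f"{species:20s} baseline {baseline:5.2f}y -> with CR {extended:5.2f}y "
          f"({gain:.0%} gain over baseline)")
```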

Ben Goldhaber: One thing that I've been thinking a little bit about too, that I keep coming back to, is that essay that you referenced about reality having a surprising amount of detail.

Obviously that ties into some of your concepts around morality. So something like a deep appreciation for the degree of complexity of the world, and trying to reject some of this simple armchair theorizing, seems like a connecting thread. I dunno if you have a sense of your own worldview, but that's what I'm kinda capturing.

Sarah Constantin: Yeah. So I think of it as, there are people who... well, I am an armchair theorist. I write, I talk, I have opinions, and I am acutely sympathetic to people who are like, shut up and get [02:00:00] shit done. Those who know don't talk, and you don't know shit.

You ain't shit. I'm acutely sympathetic to this.

Divia Eden: Yeah, but you're pro people pontificating, if they want to.

Sarah Constantin: And I like to pontificate, but I also kind of wanna try to make it not terrible. I want to have a responsiveness to the critique of, but it doesn't work.

Or, but that has nothing to do with real life. I wanna be cautious about that sort of thing. And yeah, life gets in the way of what you may think. And you're wrong a lot. No, really, you're wrong a lot. If you don't think you're wrong a lot, try a prediction market.

Try betting on Manifold Markets and see what your track record is like. It's humbling. I do, and I'm not God's gift to prediction markets. And that's where we're at. I think if people were calibrated, if you actually tried the same tests on everything [02:01:00] and not just turned the tough tests on other people and the easy ones on yourself.

It all returns to normality, but you wind up... I still have opinions, but I really do have a sense that a lot of them are gonna be wrong. And I accept the ones that I've really had a lot of evidence for and that I really have a lot of experience with.

And yeah, you wind up using a lot more words to express it than the shut up people, and hopefully learn a bit more. I think you can sometimes do better than the shut up, nobody knows shit people; you can know more than that, but it's a fairly valid critique.

And I think it's a critique that people often don't listen to once they're in the world of, they blog, they write, they're on podcasts, they're opinion people. But you do wanna keep it in mind.
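
One concrete way to do the "see what your track record is like" check is a Brier score plus a simple calibration table over a log of probabilistic predictions. The sketch below uses made-up predictions purely for illustration:

```python
# Minimal sketch of checking a prediction track record: Brier score plus a
# crude calibration table. The predictions below are made-up examples, not
# anyone's actual forecasts.

from collections import defaultdict

# (stated probability the event happens, whether it actually happened)
predictions = [
    (0.9, True), (0.8, True), (0.7, False), (0.6, True),
    (0.6, False), (0.3, False), (0.2, True), (0.1, False),
]


def brier_score(preds):
    """Mean squared error of stated probabilities vs. outcomes (lower is better)."""
    return sum((p - float(outcome)) ** 2 for p, outcome in preds) / len(preds)


def calibration_table(preds):
    """Group predictions by stated probability and compare stated confidence
    to the observed frequency of the event at that confidence level."""
    groups = defaultdict(list)
    for p, outcome in preds:
        groups[round(p, 1)].append(outcome)
    for p in sorted(groups):
        outcomes = groups[p]
        observed = sum(outcomes) / len(outcomes)
        print(f"stated {p:.0%}: happened {observed:.0%} of the time "
              f"({len(outcomes)} predictions)")


print(f"Brier score: {brier_score(predictions):.3f}")
calibration_table(predictions)
```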

Divia Eden: Yeah, thanks. That makes sense. And Ben, that's a pretty good thread that has been running through a lot of this that you managed to name. I think we're about out of time, but thank you so much for coming on. I definitely learned a [02:02:00] lot of concrete things, and again, I'm surprised.

I think it's a mistake that you haven't been on other podcasts yet.

Sarah Constantin: I enjoy podcasts. This is fun. We're happy to have discovered that, at least. All right. Cool.

Divia Eden: That too. All right. Yeah, so thanks for coming on, and thanks so much. Where can people find you on the internet? Anything you wanna link to in the show notes?

Sarah Constantin: Sure, yeah. It's just my name dot com. Cool, we'll make sure to link that. All right, cool. Thank you.

Divia Eden: I also recommend Sarah as a good Twitter follow if anyone is on Twitter. True.

Sarah Constantin: All right, cool. Thanks. See you. Bye.
