Infinite Loops
Science, Complexity and Humanistic Computation (Ep. 277)

This piece explores how we make sense of the world in an increasingly complex modern society. It highlights the importance of oral tradition in transmitting scientific knowledge and argues that cognitive diversity drives innovation more than demographic diversity. It also revisits forgotten innovations from technological history and looks ahead to a possible "Maxis 2.0." The conversation covers science fiction's cultural influence, the limits of knowledge, and the importance of staying humble, and digs into the "oral traditions" and tacit knowledge of science and technology, as well as the role of large language models in breaking down disciplinary barriers and accelerating the spread of knowledge, along with the trade-offs involved.

💡 The importance of oral tradition: Progress in science and technology lives not only in the written literature but also depends heavily on oral exchange and informal knowledge transfer among scientists and technologists. Conferences, late-night conversations, and talks with senior researchers often surface insights that are more current and deeper than what is on paper, sometimes including key technical details and directions for innovation. This "oral tradition" is an indispensable part of the knowledge system, sustaining the vitality and development of a discipline.

🚀 Cognitive diversity is the key to innovation: A team's real advantage lies in cognitive diversity, that is, differences in how members think, solve problems, and see the world, rather than demographic characteristics alone. Teams with varied cognitive backgrounds are better at sparking innovative thinking and finding more complete solutions, and so perform better on complex problems.

⏳ Lessons from tech history and the idea of a "Maxis 2.0": Early computer science and science fiction explored AI, simulation, and related concepts decades ago; many of today's hot topics were already discussed in depth. The conversation also highlights the game company Maxis (maker of SimCity) and its unusual blend of complexity science and entertainment, and asks whether a "Maxis 2.0" could emerge, concluding that new simulation toys that help people understand complex systems remain possible.

🌐 The "entanglement" of a complex world and the challenge of acquiring information: Citing Danny Hillis, the conversation describes the highly interconnected, "entangled" state of the modern world, which makes full understanding difficult. The Apple Watch example shows how the complexity of modern technology is often "shielded" from us, skewing our sense of how things actually work, and underscores the challenge of acquiring and understanding knowledge in an age of information overload.

🧠 The role of large language models and the potential loss of serendipity: Large language models have enormous potential to break down disciplinary barriers and overcome jargon, helping people quickly synthesize knowledge across fields. But relying on rapid synthesis may sacrifice the serendipitous discoveries and cross-domain associations that come from digging through traditional sources, such as browsing an encyclopedia or the library stacks. That is a trade-off worth weighing.

Sam Arbesman, complexity scientist, author of The Magic of Code, and scientist in residence at Lux Capital, joins the show to explore how we navigate an increasingly complex world that often exceeds human comprehension.

We dive into the oral traditions that preserve crucial scientific knowledge, why cognitive diversity trumps demographic diversity, the forgotten innovations hiding in technological history, and Sam's vision for “Maxis 2.0.”

This conversation had everything—from science fiction’s cultural impact to the philosophy of intellectual humility. Sam and I found common ground in how we think about complex systems, the importance of historical context, and why saying “I don’t know” is often the wisest thing a thinker can say.

I hope you enjoy it as much as I did. We’ve shared some highlights below, together with links & a full transcript. As always, if you like what you hear/read, please leave a comment or drop us a review on your provider of choice.

— Jim

Subscribe to The OSVerse to receive your FREE copy of The Infinite Loops Canon: 100 Timeless Books (That You Probably Haven’t Read) 👇👇

Links


Our Substack is growing! Subscribe below for more brain-tickling content designed to make you go, "Hmm, that's interesting!"

Subscribe now


Highlights

The Richness of Humanity

“So certainly I think a lot of the advances are around what we've done for society. Since the Enlightenment, these have been unbelievable in terms of finding ways to allow people who are incredibly different to operate together and maximize human flourishing. That's great, and that's the end goal. But we also have to recognize that humans are humans and these things are always going to be relevant. So for me, it's less about changing humans. And yeah, maybe there's some genetic things we can do at that point. I'm not really sure we are humans anymore if we're kind of changing some of these things. For me, I kind of have this deeply sort of humanistic approach to the world where I like being human. I like contending with sort of the weirdness and richness of humanity, but still trying to make me kind of the best version of myself. I think if we change ourselves too much, we're not the best versions of ourselves. We're some other version, which could be good, could be interesting. Maybe that's an experiment worth running. I'm kind of arguing against myself right now, but I still feel deeply that humanity—and yeah, for all of its goodness and badness and all the weirdness of the human OS, there's something worth preserving, taming, but also really leaning into. And so, yeah, I just want to kind of be more aware of those deeply human features and recognize them and rejoice in them, but also kind of make us the best versions of those things.”

Revisiting Tech History

“From the moment people made digital computers, they were thinking around things around artificial intelligence and simulation and certain ideas around biology and artificial life. These things are not new. And so trying to understand what people were thinking about and then recast it with, ‘Okay, we might have some new ideas, we might just have better computational power. Can we actually revisit those kinds of things and rediscover them?’ So for example, right now when we're talking about certain ideas around AI and unanticipated consequences or certain things around alignment or work and meaning—these are not new topics. You look at Norbert Wiener, the developer of cybernetics, and he had—there's this great—I think it's a collection of his, a modified collection of some speeches he gave called God and Golem, Inc. And in it, he talks about exactly all those topics. And the thing is, it wasn't even just in this weird, esoteric area of cybernetics, if you also look at the 1960s, the original Star Trek, there was an episode called "The Ultimate Computer" that dealt with basically all of these issues. I watched it somewhat recently and I was just blown away by how they anticipated the entire conversation that we're having now. So these kinds of things—looking at these kinds of questions and both the questions we're having, the technologies that people have used, how people have engaged or interacted with technology, I think these kinds of things really enrich how we think about history.”

The Entangled World

“…the computer scientist Danny Hillis, he's written about how we've moved from the enlightenment when we kind of apply our rationality to understand the world around us to the entanglement where everything's so hopelessly interconnected we can no longer fully understand it. And that's clearly the world that we're living in. But for many people, we remain in ignorance and sometimes willful ignorance of this kind of thing. And so a number of years ago, when the Apple Watch first came out, there was this great quote I found in—it was a Wall Street Journal article. I think it was the style section about whether or not people are still going to wear mechanical watches. And of course people are still going to wear mechanical watches. But this one guy was like, ‘Oh, yeah, of course I want to wear a mechanical watch. I think of how sophisticated and intricate it is, as opposed to a smart watch, which is just a chip.’ And of course, ‘just a chip’—these things are orders of magnitude more complex, but we've been shielded from it. And I think that's partly the problem.”

A New Maxis

“I mentioned earlier SimCity. So SimCity was made by the company Maxis. It was made by Will Wright. He developed this company Maxis that made SimCity and SimEarth, The Sims. And the heyday of Maxis was kind of early to mid-90s and it was this weird moment when Maxis could be a game company that also was playing in the realm of complexity science and other weird sciences, but also building these strange things that were not actually games but were kind of just toys that were teaching people how to understand all these different models about the world. And so, and I've been—and I write about this periodically and I'm talking to people about what would it mean? And is it even possible for there to be a Maxis 2.0? Could there be another company that built these—I don't know—new simulation toys? Or was the bridge between the gaming world and various scientific domains or kind of other esoteric fields or what. And was it maybe just this weird moment in time in the 90s that allowed this kind of thing to happen? I don't think that's true. I think there might be a possibility for a Maxis 2.0.”


Reading List


🤖 Machine-Generated Transcript

Episode Introduction

Jim O'Shaughnessy: Well, hello, everyone. It's Jim O'Shaughnessy with yet another Infinite Loops. I was really looking forward to today's guest because it seems like we are traveling in very similar circles, and yet Sam Arbesman and I have not met one another. Sam, I'm a little daunted, to be honest, to be even talking to you, because I don't think I'm smart enough to even come up with good questions.

Sam is a complexity scientist and writer obsessed with seemingly unrelated ideas covering science and technology—actually not covering, connecting them. He's the author of three books: Overcomplicated: Technology at the Limits of Comprehension, The Half-Life of Facts, and his newest, The Magic of Code. He's also—

I mean, goodness, I'm reminded of the Mark Twain story where the guy went on and on introducing him, and it took 40 minutes, you know, doing all the things that Twain had accomplished. And he finally got to Twain, and Twain said, "Well, I've been left five minutes to make my remarks, so I will give my address." And then he gave his address where he lived in Hartford.

But I mean, honestly, you're the scientist in residence at Lux Capital. You have a PhD in computational biology. And I love your origin story too—you're a superhero in my eyes.

Sam Arbesman: That is too kind.

Sam's Origin Story: Science Fiction Foundation

Jim O'Shaughnessy: And your origin story is fabulous in that your grandfather, who is a retired dentist and artist, gave you a huge bag of science fiction books to take with you to summer camp. And Dune was still being serialized then. That is really cool. What happened? When did your mind explode? Tell us the story.

Sam Arbesman: Yeah, so my grandfather—I mean, he's been reading science fiction since basically the modern dawn of the genre. He did not give me the serialized versions of Dune. That was well before my time. But he definitely read Dune when it was in a magazine. He gave me his initial copy of the Foundation trilogy. So I have that one. I still have it. And he was a very long-time subscriber to Analog Science Fiction. In fact, I think it had a different name at some point. I'm sure he probably subscribed to it when it was the original title.

And what he would do is after he would read all the issues, he would then give me all the old ones in shopping bags and then say, "Okay, go take these." And I would take them to summer camp and read them. And yeah, it was between those magazines and him mentioning all these other reading suggestions. Yeah, it was amazing. It was just kind of wild to see especially—and I think he introduced me to a lot of the Golden Age sci-fi like Asimov and Heinlein and Arthur C. Clarke and that time period as well as later time periods as well.

And it was just fantastic to see kind of how people were envisioning what the future meant. And the truth is actually—so in the Analog science fiction magazines, in addition to short stories and novellas and things like that, there were also essays. There would be this—I think it was called the Science Fact essay. And I remember reading those because it was people talking about just weird things kind of at the edges of science. Someone would be like, "Oh, here's some interesting theory on how to actually make a warp drive a real thing." And then I devoured that. And then some other person talking about some weird moment in history during the Middle Ages where some weird thing happened and maybe that meant something around aliens.

And of course it was super speculative and super weird, but it just opened my eyes to this collapsing of science and fiction and thinking about the future and history and all these different ideas. And yeah, it was incredibly eye-opening.

And actually as a fun thing, also related to my grandfather. So in addition to reading sci-fi, he also read science—science fact as well, a lot. And he—so I remember when I was little, he would always have Popular Science magazine and I would, when I was over at their house, I would read it periodically. And later on he mentioned to me, just in passing, he's like, "Oh, yeah, I started reading it when I was in high school." And then I realized at that point he had been reading it for, I think, over 70 years or whatever it was.

And I had a friend at the time who was working for Popular Science. And I emailed him, I said, "I think I found your longest—the oldest, longest living reader of the magazine." And they ended up actually including him in the magazine in one later issue, which was fantastic. So that was unbelievable.

The Power of Varied Interests and Serendipitous Discovery

Jim O'Shaughnessy: I love it. You know, it seems to me as I talk to people who have incredibly varied interests, that seems to be part of the tale. I also kind of grew up in a house that was just filled with books on very different subjects. And you know, I would, like any kid, pester my dad: "Why? Hey, but why does this—why?" And he would just—I actually took it from my own kids—he would just point to the bookshelf and he'd say, "Go look it up in there."

And what I found, because he had very varied interests, one thing always led to another. And by that I mean, if I was—one time, when I was really bored, he brought me in to where he had all his books, his library, and he pointed at the Encyclopedia Britannica. I think it was vintage 1960, whatever. And he said, "Read that." And honestly, that was maybe the coolest thing that happened when I was young, because you can't help but be like, "Oh, wow, I got to learn more about this now."

It's easier today, right? Because I'm taking a stab at writing my first fiction book. And honestly, I could not have done the research without large language models. I would have needed a staff of historians, people versed in science and in technology and all of that. And I find one of their abilities that I use a lot is the ability to synthesize information from very different fields.

That was one of the reasons I was so excited to talk to you, because that's what you do. That's your day job. I would love it if that was my day job. What are your thoughts on where we are right now? We're kind of in medias res, right, with where things are going. But, you know, as I was getting ready, Unix, for example, has been called the Epic of Gilgamesh for programmers because there's an oral tradition, right, where a hacker could probably recreate it from scratch. That isn't going to happen with large language models. Or am I wrong? Do the scientists and innovators of today, do they have a tradition that keeps that knowledge building and going forward?

The Oral Tradition in Science and Technology

Sam Arbesman: I think so. I mean, when you look at how scientists and technologists do what they do, oftentimes I think people think that if you just look at the scientific literature, then you'll understand, "Okay, here is the current state of the conversation around some scientific discovery or certain fields," or you'll kind of understand, "Okay, these people are working on this thing, these other people are working on this other thing." And that's definitely true. I think looking at the literature, you can actually understand that.

But the truth is, and my sense is that there's often a lot of implicit knowledge in how scientists and technologists are doing what they're doing. And because the truth is, when you look at scientific papers, oftentimes that's several years out of date because they worked on it two years ago and it takes time for it to be published. You have to really talk to the scientists. And it's the kind of thing where really understanding what is happening in a scientific field or the kinds of techniques that really have the ability to move forward—you're not going to learn that from reading the papers. You're going to learn it by talking to the scientists at a conference, late at night at the bar. That's where you kind of learn the different things.

And so I think to that extent there definitely is a certain amount of this oral tradition and sort of a community that passes things along. And you'll see this. It could be in a situation where there might be a materials and methods section in a paper. And when you dig down into it and you actually talk to the scientists, they're like, "Oh yeah, really, the only person who knows actually what's going on is some postdoc who maybe was there several years ago." There's very much like there's the keeper of the tradition who might be still in the lab, who might not be. And so there definitely is that kind of thing.

And actually going back to what you were saying with language models and being able to synthesize knowledge, I definitely think one of the powerful aspects of them is the ability to overcome jargon barriers. Because actually one of the things—when you take one step outside of a field, you don't even know the kinds of questions and even the words and terms to search for, and language models help you overcome that.

And so I remember this—I think I tell the story in The Half-Life of Facts. But when I was doing my own postdoc, me and another postdoc were working on some research and we had to cluster data in some specific way and we needed some technique and we couldn't figure it out. We did some searching like, "Okay, we'll make something and we'll invent a technique. It's not going to be very good, but it'll be good enough." And then we decided, "Why don't we just, before we do this, just talk to the statistician down the hall."

And then I feel like within under 60 seconds he told us exactly what we needed. And it was the thing where we didn't even know what to search for. And so language models really help with that, overcoming jargon barriers and kind of collapsing things together so people are not constantly reinventing things.

For me, the downside sometimes though is in the rapid synthesis and overcoming jargon barriers, you sometimes lose some of that serendipity. So when you have to look in an encyclopedia or dive into the stacks and find something, you also find a whole bunch of other things that you couldn't have anticipated and didn't expect. And that can in turn lead you to other things. And so for me, there's always a trade-off there.

Cognitive Diversity and Remote vs In-Person Work

Jim O'Shaughnessy: Yeah, I think there's always trade-offs in every new innovation and the way we use it. You know, McLuhan famously said we fashion our tools and then they fashion us. But one of the things that you mentioned—we're finding at O'Shaughnessy Ventures is very true, and that is the benefits of having a cognitively diverse team. I think they got it wrong when it was DEI and it was not about cognition. It was about what color is your hair, are you bald or are you not? Are you old or young, are you black or white? I think that those are the wrong ways to divide people. And the whole labeling—well, that would get me on another rant that isn't going to be germane here.

But listening to the story going down the hall to the statistician, we find that when we get together in real life, all of those wonderful serendipities happen at 10x when we're just doing async communication with one another. So on the one hand, I'm all in favor because the Internet has basically collapsed geography, time and space, right. And you can be wherever and your colleague can be in Mumbai and you could still work together. But now it's creeping in more and more, right, that a lot of these exchanges—as you say, at the bar at night, right—how would you optimize for that? What kind of setup would you think would be close to ideal to be able to do both?

Sam Arbesman: Yeah. Oh, that's a good question. I mean, well, one thing I will say—when it comes to bringing people from different diverse domains and different fields, one of the interesting things, I think people study this in relationship to patents where they found that people coming from different fields—it was not one of these things where on average they were always going to be better. It was kind of, they actually had a higher variance. So when they succeeded, much better than would be expected. But then they also failed a lot. And so there's a certain art to figuring out the right way to kind of balance these things. And I think that's kind of also what you're talking about. How do we also balance bringing people from lots of different areas, geographic areas, allowing people to work remotely, but also kind of in person.
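Sam's variance point is easy to see in a toy simulation. The sketch below is our illustration, not the model from the patent research he mentions; the payoff distributions and every number in it are invented assumptions, and only the shape of the result matters.

```python
import random
random.seed(1)

# Toy model: cross-field teams with the same average payoff but a fatter
# spread of outcomes. All parameters are made up for illustration.
N = 100_000
same_field  = [random.gauss(1.0, 0.5) for _ in range(N)]
cross_field = [random.gauss(1.0, 2.0) for _ in range(N)]   # higher variance

for name, outcomes in (("same-field", same_field), ("cross-field", cross_field)):
    outcomes.sort()
    top10_avg = sum(outcomes[-10:]) / 10        # the breakout successes
    failures  = sum(o < 0 for o in outcomes)    # the outright failures
    print(f"{name:12s} mean={sum(outcomes)/N:.2f} "
          f"top-10 avg={top10_avg:.1f} failures={failures}")
# Same mean, but the cross-field pool produces both bigger breakout successes
# and many more failures -- the higher-variance pattern Sam describes.
```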

And for me, I mean there's definitely a lot of research showing that in-person interaction really does have a certain amount of magic in terms of these unexpected considerations and interactions and this kind of magical information flow in a way that you would never otherwise expect. That being said, I mean, I've actually been remote for probably over a decade. So I definitely think there is something to be said for being able to actually be successful even in the absence of being in an office.

That being said, I mean, when I go to the Lux offices and I spend a lot of my time just talking with the other folks at Lux because I want to kind of drink in as much serendipity and things like that as possible. But I do think there—I mean, whether or not you're in person or remote, you have to kind of cultivate that unexpectedness because there are many situations where you're in person and everyone is in person and they're all just with their headphones on, kind of just doing their thing, sometimes interacting with each other even though they're a few feet away, but entirely on the computer. And so that obviously is not doing what it should be doing in terms of those serendipitous and unexpected connections.

And so I think there's a way of kind of crafting—almost like the information diet that you get that feels that's kind of this combination of things that you need as well as things that are unexpected. And which in many ways—and now I'm kind of thinking on the fly and kind of spitballing, but in many ways that kind of parallels the way we think about innovation more broadly, which is anything that's new and creative is this kind of combination of things that have come before. So it's kind of this balance between expectedness and unexpectedness. And I think that's—and so in terms of how innovation works, that's really what it is. And so how do we think about doing the same kind of thing in how we—in our work, actually kind of balance the expectedness of "we just need the information flow to actually get our work done" as well as kind of the unexpectedness.

And that—I mean, this is a very long way of saying I can kind of think about that problem. I'm not really sure I have the answer, but I think I would say—and maybe this is just my bias towards working remote—I think it can be done whether or not you're in person or not. It really is just a matter of being very conscious of these kinds of things. And so when I have long conversations with people over Zoom or whatever it is, we can have lots of different directions, lots of different conversations, and it doesn't really matter as much whether or not we're in person or not. It's much more about making the space for undirected interaction.

And maybe that's what it is. And so for me, I love just reaching out to interesting people and saying, "Let's just chat." And I'll be very upfront—when we get on Zoom, I'll say, "I'm not really sure I actually have an agenda for this kind of thing. I really just—I'm happy to tell you about my own background. Here's the things I'm thinking about and we just kind of go from there." And then sometimes those are the most exciting conversations. And so I think you can still do it, whether or not you're in person or remote. You have to just make that space and make it very explicit. Let's have that undirected kind of exploration.

Jim O'Shaughnessy: You know, that is kind of the conclusion that I'm coming to. Everything is a work in progress, right. But I agree that unstructured is key. In other words, we often just get locked into antiquated ways of doing things. What you mentioned—what is the agenda? Right. I've learned probably some of the coolest things that I know now through unstructured conversations with people and what we're trying to do right now, to your point about the people being right next to each other but with the earbuds in and texting each other on the computer.

We are going to be doing more in-person gatherings, but they are going to be explicitly unstructured. You know, when we have an annual meeting of all our fellows and our teammates and everything, and what we've learned—this will be the third year we're doing it—is the first year was very structured. The fellows all spoke individually about what they were working on. The teammates from the various verticals we have were just on that and it was great. But nothing like the second time around when we were like, "You know what, why don't we just have this three-hour block of time where everyone's together and we just—you can break into groups, you could do whatever you want."

And that worked so well that, you know, the third year was even more of that and the one we have coming up will have the same kind of agenda-less agenda. I know that's a weird way of putting it, but it really does seem to work.

I also want to—I'm fascinated by this idea of open-endedness in systems. Systems that continuously produce interesting artifacts on their own, right. Open-ended evolution, civilization, are your kind of archetypical ones. What would it indicate to you in the current ecosystem of, say, large language models that they've truly crossed over into open-endedness?

Open-Endedness and Large Language Models

Sam Arbesman: So yeah, with open-endedness, I mean, I think the hallmarks are kind of this recombination in ways that you would not expect. Which I mean also is related to kind of this unexpectedness—there's kind of this unexpected recombination that feels to me sort of like the open-endedness. And I guess maybe, I mean really the hallmark of it is when you run these things for long enough, do you keep on getting new interesting things?

Because I mean with computational evolution or evolutionary computation kind of models, these things are very sophisticated. But by and large, especially if you're optimizing for a certain thing, you optimize toward that and then they kind of stop after a certain amount of time. And whether it's genetic evolution or genetic programming or other kinds of techniques, they're not necessarily going to continue generating new things. You kind of generate a certain—there might be a certain burst or maybe it kind of plateaus and there's another burst, but eventually it kind of stops. And same thing with civilization. We keep on getting new things, we keep on getting new technologies and new ideas. And that's amazing. And that's what we want.
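To make the plateau Sam describes concrete, here is a minimal sketch of an evolutionary loop optimizing a fixed objective. Everything in it (the bitstring objective, the mutation rate, the population size) is an arbitrary assumption chosen only to show the shape of the run, not a model of any real system.

```python
import random
random.seed(0)

# Toy evolutionary run with a *fixed* objective: maximize the number of
# 1-bits in a 50-bit genome.
GENOME, POP, RATE = 50, 20, 0.02

def fitness(g):
    return sum(g)                      # the fixed target being optimized

population = [[random.randint(0, 1) for _ in range(GENOME)] for _ in range(POP)]

for gen in range(201):
    best = max(population, key=fitness)
    if gen % 40 == 0:
        print(f"generation {gen:3d}: best fitness {fitness(best)}/{GENOME}")
    # next generation: mutated copies of the current best individual
    population = [[bit ^ (random.random() < RATE) for bit in best]
                  for _ in range(POP)]
# Fitness climbs fast, then flatlines near the optimum: once the fixed
# objective is met, the run produces nothing new -- the opposite of the
# open-endedness Sam is after.
```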

With language models, I think you would want the same kind of thing. And I wonder whether or not—I mean the language models we have right now don't necessarily feel that open-ended to me. And so maybe that's just a matter of taste. But I also think maybe we just haven't run the experiment for long enough. I feel like this is one of those kinds of things, especially when it comes to evolution in computers. A lot of the times when we've done these kinds of experiments, we just don't run them that long relative to the eons of true actual natural evolution. And so we don't even know. Obviously we have a good sense of maybe these things have stopped.

But when it comes to language models interacting with humans or doing kinds of things, maybe—they're still really new. Certainly whether it's on a civilizational scale, on a technological scale, on a biological scale, these things, they've been around for less than an eye blink. And so to say whether or not we really understand their true open-endedness, I feel like I'm not really sure we know the answer yet.

And so to kind of say either one is premature. Now of course, we keep on developing these kinds of things. And this is also just related to the fact that, I mean anytime you say "oh, humans can do this and humans can be open-ended or whatever and these large language models cannot," of course, 10 minutes later you find out that "oh actually these things can do whatever the things we thought was only unique to humans." And so we kind of have to have a certain amount of humility there.

But I think when it comes to language models interacting with each other and imagine also even just a civilization of large language models all interacting—right now, we don't really have the computational capacity for doing, for even running those kinds of experiments or maybe not necessarily running them at scale. And so yeah, I'm not really sure we know and I think that's kind of exciting that we just haven't truly run that experiment yet and to know whether or not they're entirely open-ended. I imagine there are many people, I think, who would say we have good reason to believe they're not quite there yet. But I think it's exciting that there could be so many experiments we have yet to run.

Evolutionary Development and Understanding New Things

Jim O'Shaughnessy: I could not agree with you more. My friend David Ha, who you might know his work, when he broke off to start his new company, he was basically just obsessed by the idea that nobody was doing evolutionary development of large language models. I was talking to him once and he explained the whole thing to me. I understood maybe 40% of what he said to me. But I was really truly intrigued by that.

And then when I was getting ready to chat with you, your idea that we should study messy biological systems really struck me as a good way to proceed in terms of having more of this development because I agree with you, we're probably not there yet. And there's that great quote about it is really dangerous to understand new things too quickly because we probably have not understood them and it will lead us to very bad things by saying, "Oh yeah, I got that, this is the way those work and we can shut the case on that." I think that's one of the kind of bugs in human OS that—

Sam Arbesman: Right.

Jim O'Shaughnessy: I think we're far too quick to believe that we know everything. And I have the opposite suspicion. My suspicion is we, you know, to Edison's great quote, we don't know one-half of one percent of a millionth of things. But how do you structure work in that kind of environment? How do you guard against this sort of—as I was getting ready, I was going through the World3 global system dynamic model developed at MIT which you've talked about in 1972 that led to the Limits to Growth. And of course, I thought instantly of Malthus on population paper from 1798.

And he got the math of the population right, right. If you looked at the growth of population, Malthus nailed it. What he didn't nail, and I view this as a bug in human OS—he thought that the ability to feed that population was fixed, right.

Sam Arbesman: Right.

Jim O'Shaughnessy: And so he made the eminently linear and understandable argument, "Hey, we're going to hit a zone where all we'll know is famine and death if we keep going at this level of population." And he got sucked into what David Deutsch would say was the—"Hey, you don't know what we haven't discovered yet," right. And you don't know that the Haber-Bosch process is going to double disposable nitrogen and be able to have the population go from 1 to 8 billion and counting.

And then I was thinking about the World3 thing and its narrative arcs. You know, business as usual versus stabilized world. There seems to be again—the back-test they did on what they projected found that they were right on population and on industrial output, but they missed the big discontinuities that weren't expected, right. That were kind of black swans, like the energy shocks. They didn't think that a cartel might say, "You know what, we're going to embargo the United States and cause an oil crisis." They didn't anticipate the breakup of the USSR, and so on. Right.

So it's that part of the modeling that I find fascinating. Really, how do you design a model? If I asked you—okay, I want you to develop a model that takes this into account and takes, you know, what was Rumsfeld's quote? Unknown unknowns, right.

Sam Arbesman: Right. Yeah, the knowns. Yeah, right.

Jim O'Shaughnessy: But I think that, you know, everyone makes fun of him for that, but I—

Sam Arbesman: There's a lot of insight there.

Jim O'Shaughnessy: Absolutely.

Sam Arbesman: Right.

Jim O'Shaughnessy: The unknown unknowns are the ones that come and bite you in the ass. How would you model for that?

Models, Simplifications, and Biological Approaches

Sam Arbesman: Yeah, I mean, so when I think about—and there's the quote of "all models are wrong, but some are useful," right. So every model is going to be a simplification of reality. And the question becomes, what do we want it for? Because sometimes we just purely want it for prediction. So with weather modeling, oftentimes we just throw more and more complexity into these things and they actually work and they've actually worked better and better over time. We kind of have this intuitive sense, "Oh yeah, weather prediction is not so great." It's actually been slowly but surely getting better quite a bit over the past several decades.

But when it comes to intuitive understanding, trying to put the entire weather model in your head is basically an exercise in futility. There's no way you're going to understand that kind of thing. And so for understanding, that's an entirely wrong approach.

And so for me I think when I think about World3 and Limits to Growth, their models were inherently simplifications and they're actually very upfront about that. I think the world in their model has a completely mixed population, so there's no geographic distribution. All of pollution is condensed into a single number. All of technology is a single number. Whatever. It's all fairly simple.

The interesting thing is actually, and even though I think there's a lot of people I think who kind of crap on the World3 model as it's kind of moved forward in the future, it actually wasn't so bad relative to the way in which the world—the shape of the world. That being said though, they kind of really wanted it to be either a spur to action or some sort of mental model to better understand the world. Now I think to a certain degree with their model they actually had—and they had certain—there was a certain ideology around the model as well that they're kind of putting in. But all models are like that kind of thing.

And so for me it really comes back to what are we trying to get out of this kind of model. So for example, SimCity is a vast oversimplification, possibly entirely biased oversimplification of how cities operate. That being said, first of all it actually got a lot of urban planners to get involved in that field. So great success. But in addition it also just teaches you about the inherent non-linearity and unexpectedness of complex systems. And I think even though the actual city stuff is entirely wrong, the fact that it can teach you about that and how certain choices will have big effects or small effects or unexpected consequences, how systems bite back—these are all the things that humans are really bad at having an intuition about and we just need more models about that. And so SimCity was really good for that kind of thing.
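A standard minimal example of the non-linearity Sam credits SimCity with teaching is the logistic map: one rule, one knob, and qualitatively different long-run behavior depending on where the knob sits. This is our illustration, not anything from Maxis.

```python
# Logistic map x' = r * x * (1 - x): one rule, one parameter, and the long-run
# behavior changes qualitatively as the parameter moves -- the kind of
# non-linearity that complex-systems toys teach.
def long_run(r, x=0.2, burn_in=500, sample=6):
    for _ in range(burn_in):           # discard the transient
        x = r * x * (1 - x)
    states = []
    for _ in range(sample):            # then observe where it settles
        x = r * x * (1 - x)
        states.append(round(x, 3))
    return states

for r in (2.8, 3.2, 3.9):              # three settings of one "policy knob"
    print(f"r={r}:", long_run(r))
# r=2.8 settles to a single value, r=3.2 oscillates between two values,
# and r=3.9 never settles: same rule, wildly different behavior.
```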

Now when it comes to really large complex systems that we want to just better understand and better model, for me I view it as—and actually going back to what you're talking about with evolution and biology—actually taking a more biological approach to these kinds of systems and having an almost tinkering iterative approach to modeling and approaching these systems, including actually technological systems that we ourselves have built. Because oftentimes when you look at AI systems or just really complex technologies that we have built as a society, they rival the complexity and non-linearity and weirdness of biological systems.

And so when it comes to biological systems, yeah, sometimes people talk about biohacking and trying to hack—that's often based on sort of an engineering mindset that really does not grapple with the true complexity of biology. The way biologists work is they try to understand a little bit of a system in its entirety or how different pieces work together and then slowly but surely build up a more complex and complete picture of the system. And I think that's the kind of thing we need to do. Even when it comes to whether it's natural systems like socio-technological systems of our own design, we need that actually—that more almost like a kind of a naturalist of old where they're just kind of collecting different species, trying to understand bits and pieces and slowly but surely building up a complete picture. That's the kind of mentality we actually need for understanding these large systems.

And so for example one would be—even in the same way that naturalists collected insects and bugs, actually collecting bugs, in this case errors and glitches, is actually a really good way of slowly but surely building a more complete picture of a system. Because oftentimes the model we have in our minds does not actually map onto reality, but we only notice it when something goes wrong. When there's kind of this weird gap—like last summer, when all the Microsoft systems across the planet went down and affected airlines, and no one really realized that everything was interconnected until this kind of thing went wrong.

Or a much more trivial example—and I'm pretty sure the experts actually knew what was going on—but years ago I was living in Boston, in Brookline, and I think a water main broke and so they couldn't get water from the main reservoir, so they had to get it from a backup reservoir. And they couldn't guarantee the quality of the water for several days. And so they issued some boil water order. And it was for all of the various municipalities of Boston, so including Brookline where I was living, except for Cambridge, Massachusetts, which was surrounded by municipalities that actually did have to boil their water. And it was not until that failure that I realized, "Oh, they actually get their water from a different source."

Now of course, the people working with the system, I assume, were well aware of where water was coming from for Cambridge, but I didn't learn that kind of thing until something went wrong. But often even the experts and the people who built these systems don't learn about how the system actually operates until we have these kinds of failures. And so for me, I think—I mean obviously failures, they're troublesome and worrying, but there's also a certain way in which they should be celebrated because they actually reveal something more about how the system operates and hopefully can allow us to bridge that gap between how we think it works and how it actually does work.

Failures as Portals of Discovery

Jim O'Shaughnessy: Yeah, I've long said that failures are portals of discovery and rather than get upset by them or cast aspersions on the people who are overseeing the thing that failed, I think it's quite the opposite. It offers us a huge opportunity to learn. And you know, our mantra here at my company is crawl, walk, run. And when you're crawling—I have six grandchildren, I'm very lucky and most of them were all here last week. And watching them go when they're first learning how to walk—what a cool thing to watch because you just start thinking about, can you imagine if we tried to impose on my one-year-old grandchild who's now walking, learning, he's cruising the furniture and everything.

If we took a modern corporate view and said, "No, you can't do it that way, you can't do it"—all of these prohibitions—we would never learn to walk because guess what, when you're learning how to walk, you fall down all the time. And then it's in the falling down that the child goes, "Oh, okay, if I do it that way, I go boom." And so my idea is we literally can't know everything. Never will, I don't think. I mean, that's just my speculation. I'd be willing to put a long bet on it, though. And you know, my heirs could win that bet a thousand years from now.

Have you read the book When We Cease to Understand the World?

Sam Arbesman: I have, yeah. Yeah. It's fantastic.

Jim O'Shaughnessy: I, as I was getting ready for you, I just kept going back to that book, right? Because there's that section in there where Heisenberg is freaking out because he's like—and it's fictional, it's a book, it's a work of fiction using actual people and what they discovered. I think it's a really novel format. But he's freaking out. He's like, "I don't know how I came up with this." He's looking at his notebook, right? And he goes, "But I think I now have a way to understand reality. Not just understand reality, but to manipulate it at its most basic level." And then, of course, it goes into the quantum consequences, right. He gets vertigo and he thinks, "I should maybe just throw my notebook in the sea here."

Is there a way, in your view, to—I don't even know how to phrase the question—to ring-fence new discoveries? Should we? Shan't we? What's your view on as—because we're getting into some pretty heady stuff now.

Progress, Prediction, and Intentionality

Sam Arbesman: Yeah, I mean, it's a tough question because I—I think in some cases, I assume you mean in terms of when we discover new things, should we kind of limit who knows about it or how we advance or slow down certain technological advance? Is that what you're kind of pointing towards?

Jim O'Shaughnessy: Yeah, yeah. No, not necessarily that. I tend to be more on the side of—I don't want a panopticon, people. I don't think that gets you to better knowledge at all, because back to cognitive diversity.

Sam Arbesman: Right, yeah.

Jim O'Shaughnessy: In fact, I'm vehemently opposed to a panopticon controlled by a few.

Sam Arbesman: Does not sound like a good world to live in. Yeah. It does not sound like a good world to live in.

Jim O'Shaughnessy: Awful world. I mean, you don't really need much imagination to see what living in that world looks like. And yet, and I'm also not a "Yeah, let her rip. Let's just do crazy things and damn the consequences." I'm looking for kind of the middle path.

Sam Arbesman: Oh, yeah. I mean, so the way I think—I mean, when people talk about progress or technological advancement, I mean, oftentimes the way I think about it—well, one thing I will say is, as much as I love thinking about the future and kind of what the future holds, and we were talking about science fiction earlier and all these different kinds of things, I know I'm not that great at prediction. I can remember, I think it was when my first book came out, this is back in 2012. I was almost relieved that the book was coming out then because I knew for certain that right after—that was going to be one of the last books coming out in print. Everything was going to be ebooks and there were going to be no print books.

And of course, I was wildly wrong. And I can think of other examples where I was like, "Oh, yeah, this thing's going to be happening." Never happened, or happened on an entirely different timeline. And so for me, I definitely want to have a certain sense of humility about predicting the future.

That being said, when I think about how people think about technological progress and advancements and new discoveries and things like that, it's not—it shouldn't just be, "Oh, these are kind of forces that we're just being buffeted by." And you kind of pour progress and technology on top of things and science moves forward. It's an accumulation of choices of, "Okay, what are the things we're learning? What are the kind of world—what are the advances we're making?"

And so for me, it's much more about, okay, taking a step back and saying, "What is the world that I want to live in? What is the world that we should be living in?" And then let's work backwards to try to make that much more likely. And so for me, when I see advances happening, I'm much more disappointed by not necessarily the speed or lack of speed, but more if people are thinking or not thinking about, or in this case, I'm disappointed by sort of the lack of forethought about the consequences of these kinds of things. If there's a lack of intentionality about the world that I want to live in and make that kind of thing a reality—that feels very disappointing.

And so, and you see this oftentimes when there's kind of a new technology or kind of a new buzzword that everyone's talking about, whether crypto or AI or whatever, and people are just trying to do things for the sake of doing things. And that's not all bad, but it's often a hallmark of just not really thinking about the world that you eventually want to live in.

And so for me, actually going back to the science fiction stuff, one of the reasons I love sci-fi is it can give me a suite of options of, "Okay, here are the different worlds that I might want to live in." And do I want to live in the Star Trek Federation world? Do I want to live in Iain Banks's Culture novels? Do I want to live in some dystopian future, which I desperately do not want to live in? And let's figure out what makes those things more or less likely and then try to do that. And I think that's the kind of thing that I think about when people try to do scientific discoveries or make technological advancements.

Science Fiction's Role in Shaping Culture

Jim O'Shaughnessy: Yeah, again, we are very simpatico there as well. I think that the way I look at it is I think almost everything is downstream of culture and culture is made up of many disciplines, etc. But staying with science fiction for a bit, right. So I have been a bit dismayed by later science fiction because it was all just so dystopic and—and by the way, I don't hate it all. It's not science fiction, but it's a contemplation that I think is beautiful. Cormac McCarthy's The Road—really a downer, but beautiful. And it's not talking about innovation, it's talking about love, a father's love for his son and what that will do. But the unintended consequences if you bake in persistent pessimism into a society that—

I can't remember the author of the quote or the quip, but it was like, "I don't care who writes a country's laws if you let me write its songs and stories." And basically we actually started a publishing company because we didn't want pessimism to win the day in all science fiction. And we just have coming out August 1st a book called White Mirror, which is a more optimistic look at what the future—

Sam Arbesman: Neal Stephenson, the science fiction writer, he actually did—I think in partnership with Arizona State University a number of years back they did a project called Project Hieroglyph which was exactly that, which was saying let's try to actually envision these positive visions of the future. And they had a whole collection of short stories and they worked with scientists and engineers to kind of get everyone to be as creative and imaginative as possible.

I think one of the—I wouldn't say problems, but one of the complexities there is oftentimes the most interesting stories are when things are going wrong or where there's tension. And so you kind of have to—and so for example, Iain Banks's Culture novels, oftentimes they don't happen in the Culture, where everyone is in this post-scarcity society and everything's perfect. It's often at the edges, where they're interacting with other weirder civilizations, and so there's always that tension there. But I definitely think we need more visions of—right—of what the world can be like when things go right.

Jim O'Shaughnessy: Yeah. And but temper it with a kind of what I would call a rational optimism. A realistic optimism.

Sam Arbesman: Yes.

Jim O'Shaughnessy: Because the future is not problem-free.

Sam Arbesman: No future is, right, and people will still be people.

Human Operating System and Its "Bugs"

Jim O'Shaughnessy: And that was the thing. It's almost like you're looking at my notes over my shoulder because I was going to say people—I call it human OS. I stole that from Brian Roemmele. I love it. Human operating system doesn't change very much. And in my old life as an asset manager, I basically said that the only sustainable edge is to arbitrage human nature and continue to be able to do it because markets change millisecond by millisecond. Human nature doesn't budge—millennia by millennia.

But am I wrong, right? I think about what I call the bugs of human OS, right. The illusion of control. Everyone wants certainty, which we never can have. You know, we're probabilistic. We live in a probabilistic universe, but many are deterministic thinkers, and ouch, that's a mismatch, and hilarity or tragedy often ensue.

Do you think that you could build a system where you kind of make human behavior a constant? And by that I mean tribalism, you know, status hunting, you know, inherent biases, you know, all that big mix. They just like—we have libraries filled with books of well-designed reproducible studies that like yeah, confirmation bias is a—and it just keeps persisting, illusion of control, the need for certainty. Can we hold that? What do you think? Is that something you can hold constant or are we ultimately going to see changes in basic human nature? Are we going to patch those bugs?

Sam Arbesman: I'm not sure if we're going to fully patch them. I think if you look at human history, I mean it's not a single direction—the Whig view of history is not really correct. That being said, there have been improvements at the cultural level, and it's less about changing human nature entirely and more about kind of managing it. I think actually this goes back to some of what we were talking about earlier with the biological and kind of tinkering understanding.

I think it's this—that kind of tinkering approach with human nature is probably the way to think about it where it's less about, "Oh, we're going to change people and they're not going to be subject to these biases or these cognitive quirks" and more about how do we reduce the effects of those kinds of things and make them—and either make them less worrisome or simply just make people more aware of these kinds of things.

And so when we look at tribalism and things like that, we have actually over human history kind of expanded our sphere of concern in terms of who we kind of view as part of us versus the other. And so I think Robert Wright talks about this kind of stuff, I think in Nonzero and he actually has a book I think related to—I forget exactly the title. Maybe something related like The Evolution of God, but kind of also using this as looking at the evolution of religion as a history of kind of expanding spheres of concern. And so I think we have found ways of managing some of these kinds of things.

That being said, yeah, there are these invariants—if you read ancient wisdom literature, whether it's Stoicism or the book of Ecclesiastes, the ideas in there are still extremely relevant because we're still people. And actually, I have a lot of lists on my personal website where I kind of collect various different things. And one of them is a canon of modern wisdom literature, because there's a lot of books that sort of rhyme with things around Ecclesiastes but are kind of steeped in more modern wisdom approaches or more scientific approaches and things like that. But at the same time, they still often have the same messages as these texts from thousands of years ago.

So I think we might be able to change things at the margins and we have a certain set of good ideas that allow us to kind of tame the downside of human nature. So certainly I think a lot of the advances are around what we've done for society. Since the Enlightenment, these have been unbelievable in terms of finding ways to allow people who are incredibly different to operate together and maximize human flourishing. That's great, and that's the end goal. But we also have to recognize that humans are humans and these things are always going to be relevant.

So for me, it's less about changing humans. And yeah, maybe there's some genetic things we can do at that point. I'm not really sure we are humans anymore if we're kind of changing some of these things. For me, I kind of have this deeply sort of humanistic approach to the world where I like being human. I like contending with sort of the weirdness and richness of humanity, but still trying to make me kind of the best version of myself. I think if we change ourselves too much, we're not the best versions of ourselves. We're some other version, which could be good, could be interesting. Maybe that's an experiment worth running. I'm kind of arguing against myself right now, but I still feel deeply that humanity—and yeah, for all of its goodness and badness and all the weirdness of the human OS, there's something worth preserving, taming, but also really leaning into. And so, yeah, I just want to kind of be more aware of those deeply human features and recognize them and rejoice in them, but also kind of make us the best versions of those things.

Jim O'Shaughnessy: Yeah, yeah, I agree. And that leads me right into something you wrote where you said that Ada Palmer credits Francis Bacon with inventing the very idea of progress. And I find that interesting because when we decided to give these fellowships, I was inspired by Francis Bacon and his story of the Atlantean journey, where they send the scientists out to collect all the data and bring it back, and then contrast it with the Judaic tradition, which is more linear time. And you mentioned Ecclesiastes, and, you know, the most famous line from that is, "There's nothing new under the sun."

And the end of your essay, you say the race is not always won by the swift. And I smile because I thought of the Damon Runyon quote, which is, "The race is not always won by the swift, nor the battle by the strong, but that's the way to bet."

So let's talk about that a little bit, because you mentioned just a moment ago these forgotten innovations. Can you give me some examples of some that you've stumbled across and gone, "Wow, what if we resurrected this one?" What would be the modern view on that? I'm fascinated by that because I, like you, think that you can gain a ton of wisdom by reading ancient things. Heraclitus was the first to basically say, you know, "The same man cannot step in the same river twice," and he was pre-Socrates, right.

So I definitely agree with that kind of Lindy look at things that continue to persist generation after generation. But I'm less well informed on abandoned innovations. Do you have some examples of some you've come across?

Technological History and Forgotten Innovations

Sam Arbesman: So abandoned innovations, I mean, so that—I'm not sure I know as many of those. I mean one of the things and related to what you're saying though, I think a lot about just technological history more broadly. And actually one of the interesting things I see in the tech world, especially kind of in the Silicon Valley tech world, is a certain amount of historical ignorance around technological advancements. Oftentimes proudly ignorant. Which for me strikes me as very concerning because for me I think there is something to be gained from understanding this kind of path dependence and seeing the reasons behind why certain things were invented and discarded, sometimes rediscovered.

And so for me this is not quite ancient technology, but one of the things I think about—and so in my new book, The Magic of Code, I talk a lot about technological history and that's—well, I would say there's two aspects. One is technological advancement is changing so quickly right now that anything I write that's kind of up to the minute will still be out of date almost instantly. And so looking to history is less likely to change. But also I think technological history is deeply relevant for how we kind of think about the world because especially when it comes to computing, a lot of the things that we think are new, a lot of the discussions we're having that we think are new or the advances that we're making—a lot of the time those ideas were almost—they were around almost within the inception of the modern digital computer.

People from the moment people made digital computers, they were thinking around things around artificial intelligence and simulation and certain ideas around biology and artificial life. These things are not new. And so trying to understand what people were thinking about and then recast it with, "Okay, we might have some new ideas, we might just have better computational power. Can we actually revisit those kinds of things and rediscover them?"

So for example, the ideas we're discussing right now around AI, whether unanticipated consequences, alignment, or work and meaning: these are not new topics. Look at Norbert Wiener, the developer of cybernetics. There's this great little book of his, adapted from some lectures he gave, called God and Golem, Inc., and in it he talks about exactly these topics. And it wasn't even confined to the somewhat esoteric world of cybernetics. If you look at the 1960s, the original Star Trek had an episode called "The Ultimate Computer" that dealt with basically all of these issues. I watched it fairly recently and was blown away by how it anticipated the entire conversation we're having now.

So looking at these kinds of questions, the questions people were asking, the technologies they used, and how people engaged and interacted with technology, really enriches how we think about history.

Now, going back to your original question about innovations we've forgotten: certainly within computing, in the early days there was a lot of really interesting discussion about not treating computers as fun, whiz-bang gadgets, but viewing them as tools to help us become better versions of ourselves, think better, educate our children. Looking at how people thought about those things is really useful, not because their algorithms are necessarily what we want to use, but for the spirit of innovation they were giving off. That's something we need to reinvigorate. The people thinking about the future of programming and the future of coding get it; they talk a lot about those earlier days. But by and large, in certain corners of the Silicon Valley world, we've forgotten it.

And actually, it reminds me: did you ever watch the TV show Halt and Catch Fire? Do you know it? Of course, it's amazing. So in the very first episode, set in the early '80s, I think, one of the characters says, "The computer's not the thing. It's the thing that gets you to the thing."

Jim O'Shaughnessy: Yeah.

Sam Arbesman: And that's the whole point of computing, and we've kind of forgotten it. I feel like looking back to those earlier days, to how they thought about computers as the thing that gets you to the thing, to the way they built these machines, even just to old computer magazines and the kinds of software people were playing with and the things they were trying, I just find that incredibly exciting and invigorating. So yeah, there's a lot there to be rediscovered.

For me, I almost want there to be awards or competitions for people to go searching in the stacks of old technology and the history of software: to find the weird things people tried that we abandoned for one reason or another, and that maybe should be reexamined. I would love to see things like that.

Jim O'Shaughnessy: Yeah, me too. And I've always been really big on context, right? I think Cicero said something like, "If you don't know what happened before you were born, you will remain forever a child." And that context-free mindset: you attribute it to some in Silicon Valley, but it's not just Silicon Valley, it's everywhere, right?

I had a long conversation with a writer, a young guy, a millennial, not Gen Alpha or a Zoomer, whose primary worry was that the obsession with, and addiction to, the new (social media, for example) has a very bad effect on people's willingness to engage with older work. For example, I love the series by Will and Ariel Durant, The Story of Civilization. I don't know too many people who've read it; I've got a young guy who works for me who's reading the entire however-many-volume set. And maybe this is just me being a fuddy-duddy, but there is so much in there that is relevant today.

And to your point about people talking about this years and years ago: we're developing an on-prem AI, and so I was going through the history of it, and The Economist, which was and is very widely read, was writing about AI back in the late '80s and '90s, right? And I'm reading this and thinking, "This sounds like it could have been written today." Because they were hand-wringing: "Is it going to take the jobs of white-collar workers? You won't need accountants anymore once the expertise can be shrink-wrapped and put on a shelf." Now, there's one problem: they didn't anticipate that we'd stop going to stores to buy shrink-wrapped software at all.

But the idea of context, I love that. And maybe, I was just thinking: you're aware of our fellowship program?

Sam Arbesman: Yeah.

Jim O'Shaughnessy: Okay. So would you be willing to work with me on a special fellowship for next year, when we open them up for 2026? We'd fund the person, not the project: find a fellow who literally does this task.

Sam Arbesman: Ooh, this is interesting. That's a fun idea. That's kind of wild. I love this idea.

Jim O'Shaughnessy: Yeah. So I mean, I'm just thinking of this now—what a great idea I'm getting from you. If you wouldn't mind giving us some input, we could design what we're looking for and then go find that person.

Sam Arbesman: Oh, that would be interesting. Yeah, to find the right person. You'd want someone who's knowledgeable, but maybe not too knowledgeable, so they can be genuinely excited by finding some of these things, and you'd definitely want—

Jim O'Shaughnessy: A tinkerer and a generalist, you know.

Sam Arbesman: Oh yeah, there's definitely something there, right? Because the truth is, even if they find something that isn't strictly new, where someone says, "Oh, we've known about this for a while," there's something to be said for the import-export of ideas. Even if an idea is well known in one little area, if nobody there is doing anything with it, or making it relevant to some other field, then it doesn't matter. So you need this process of rediscovery, of technological archaeology, combined with an import-export process that makes things relevant to the modern day. Oh, that is super exciting.

Jim O'Shaughnessy: Cool. All right, so I'm going to consider that a yes. You're going to be hearing from me later on as we gear up for 2026, because I just had that idea listening to you and I love it. So thank you.

Sam Arbesman: My pleasure.

The Importance of Humility in Knowledge

Jim O'Shaughnessy: Talk a little bit about humility. I have a friend who frames it as pre-fall and post-fall.

Sam Arbesman: Okay.

Jim O'Shaughnessy: And you want to deal with post-fall people, because they've had the shit kicked out of them so many times by the world. If they're still in the game, they have a level of humility they did not have pre-fall. I saw it happen in my own life, right? Pre-fall, I was a proselytizer: "this is the way, and I will tell you all about it." And then I'm like, "Yeah, I made a lot of mistakes." In fact, I wrote a piece called "Mistakes Were Made, and Yes, by Me." Because there's this idea, prevalent not only in investing but in business in general and in academia as well: this fear, this desire to always appear to be right. And I just think that's toxic, right?

If I could get people to utter one phrase more often and more sincerely, it would be "I don't know," right? Because that's the spring of curiosity. When I was in asset management and they'd ask me a question, I'd say, "You know, I don't know, but I hope I can find out. And if I do find out, I will give you the answer." Why is that reluctance to say it just part of the human OS, do you think?

Sam Arbesman: Yeah, and I would have to say it is. Maybe it's a certain amount of insecurity, wanting to show that you know everything. Or maybe it's just not knowing how exciting it can be to not know things. Because for me, and you mentioned running to the encyclopedia to look things up, a family dinner is a success when we've gone and looked something up in a book, or when my kids ask me a question and I say, "I don't know," and then we go try to figure it out together. That's an unbelievable feeling. And maybe people just don't realize the true joy of that.

Now, of course, "I don't know" is somewhat different from "I was very certain of something, and now I might be wrong." And that kind of thing is, to a certain degree, ultimately a scientific mindset: not science in the sense of specific scientific subjects, but in the sense of how science is actually done.

I'm reminded of a professor of mine from graduate school who told me this story. He went in on a Tuesday and lectured on some topic, and the next day he read a paper that invalidated everything he had taught. So he came in on the Thursday and said, "Remember what I taught you? It's wrong. And if that bothers you, you need to get out of science."

And I think that idea, that things are constantly in draft form, can be really rewarding and exciting, but it's very hard, certainly outside of science. And to be honest, even inside science: if you're the one whose discoveries are being overturned, a lot of scientists fight tooth and nail against it. It's easy to endorse in the abstract. When it comes to knowledge in general being overturned, scientists get it; when it comes to their own science, that's a whole different matter. Going back to the human OS: science is done by humans, and humans are going to be very human when their work gets contradicted.

But yeah, I think we need more of that mindset. It's good to work at the frontier of knowledge, where you know the least and the most exciting things are happening, but where things are constantly being overturned. Being told that some bit of information in your head was half-remembered and you're actually wrong should be worth celebrating: "Oh, now I get to learn more about that." And related to that is the whole "I don't know": you want to learn more things.

So maybe it's cultivating that scientific mindset and a certain amount of curiosity, as well as recognizing that while we have clawed back a huge amount of ignorance about the world, there's still so much ignorance left. And that's fine. That's okay. We've done really well as a species, but there's still a lot left to learn. Not only is that fine, it's great.

Treating Beliefs as Hypotheses

Jim O'Shaughnessy: Yeah. One of the things I started doing about 10 or 15 years ago was treating my beliefs, the things I thought I knew, as just hypotheses, right? And it helped enormously, because it made it so much easier for me to say, "You know, this model I had of topic A worked really well and then started to disintegrate." And when I went hunting for why that happened: wow, there was all this new research, as in your story, that very persuasively negated what I thought I knew. By treating it as a hypothesis, I didn't attach it to me. I didn't attach it to Jim.

And you know, it's like the people on social media all declaring "a hill I'll die on is...," right? I always joke that I take the General George S. Patton approach: I'd much rather have the other poor dumb bastard die on his hill. I'd rather change my mind, and I'd rather delete an old belief that is no longer serving me.

And yet, then what? We come back to the human OS, right? The desire for a coherent personality. People start freaking out (in my opinion; this is just me speculating, and I'm probably wrong) when they have to give up a cherished belief, something they've invested their sense of self in. It's like a mini death, right? They don't want to give it up because they think, "I'll decohere," or "People will think I'm flip-floppy," or "I just don't have solid pillars under my various beliefs." Man, all those pillars are built on sand in the first place, right? When you think that way, it makes it a lot easier to delete old beliefs and use the newer model. It's an ongoing process, right?

And you know, you address this in The Half-Life of Facts: the half-life is getting shorter. We were talking about textbooks the other day, right? I love physical books. I love the artifact of a physical book in my hand. But if we ever got into textbook publishing at Infinite Books, my view was that they'd have to be electronic, because the minute you print them, they're out of date and not useful. What are your thoughts?

The Half-Life of Facts and Updating Knowledge

Sam Arbesman: Yeah. Certainly, on being able to update things, delete old beliefs, and modify them: for me, someone who is willing to say, "These are the things I tried or thought were correct, and they were wrong," those are the people I find much more appealing than the ones who stick with their ideas, whose ideas have never changed. That doesn't feel as interesting to me. I'd much rather have people who have slowly but surely, asymptotically, approached the truth by updating and changing.

So yeah, when it comes to textbooks: they've obviously changed over time. Looking at old print textbooks as artifacts is great; they're fascinating, I agree. But if you ask what I would include in a textbook that had to be in print and couldn't change, it would be much more about how to constantly learn than about the facts themselves. Which, fundamentally, is what science is. Science is not a body of facts; it's a rigorous means of querying the world. So suddenly every textbook becomes "here's how to actually go out, learn new things, and test them." The actual knowledge is always going to be in flux.

And by and large, it's the newest things that are most subject to change. That being said, there are times when things we consider fundamental can shift as well.

We were talking about my grandfather earlier. He was a dentist, and when he was in dental school he learned the wrong number of human chromosomes: 48 instead of 46. It turned out there was a period of several decades when microscopes were good enough to see chromosomes but not quite good enough to count them accurately, and the wrong number simply made it into the basic knowledge. The fact that something like that, something we take for granted as certain, could change means we need a great deal of humility around these things.

And I would say medicine is probably the best field at understanding this. They have continuing medical education, and online tools (I don't know if you'd call them textbooks) where you can always find the most up-to-date knowledge. They recognize the problem: medical students are taught that a decent fraction of what they learn will be out of date or wrong within a few years of graduation.

My father, a retired dermatologist, told me a story about a multiple-choice exam given in consecutive years with one question unchanged: same question, same choices. One year, one choice was correct; the next year, a different one was. Nothing about the test itself had changed; we had simply learned new things about the world. So I think medicine gets it a little more than other domains, because things change so quickly and lives are on the line. But I definitely think that approach should be exported to all fields of knowledge.
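
Editor's note: the "half-life" framing from Sam's book title suggests a simple exponential-decay picture of knowledge. As a rough illustration only (the 45-year half-life below is a purely hypothetical number, not a figure from the book or this conversation), here is a minimal Python sketch:

```python
def surviving_fraction(years: float, half_life: float) -> float:
    """Fraction of a field's facts still considered correct after
    `years`, under a toy exponential-decay model of knowledge."""
    return 0.5 ** (years / half_life)

# Hypothetical numbers: with a 45-year half-life, roughly 86% of what
# a student learns would still hold a decade after graduation, and
# only half would survive 45 years.
for years in (10, 20, 45, 90):
    print(f"{years:>2} years: {surviving_fraction(years, 45.0):.2f}")
```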

People Closest to Truth: High-Stakes Decision Makers

Jim O'Shaughnessy: Yeah. The (I don't even know what to call him; I guess I'll call him a philosopher) Jed McKenna says that the people he finds most interesting are those whose choices and decisions carry real consequences. He maintains they are closest to truth and reality because the cost of being wrong, as you brought up with doctors, is death in certain circumstances. But then he brings in two others I find very interesting. So he's got doctors in there; he specifically calls out emergency-room doctors.

Sam Arbesman: Okay.

Jim O'Shaughnessy: But then he's got special operators in the armed forces, the SEALs and Delta and those people, and, interestingly enough, traders, people who trade in financial markets. Because one wrong trade and your 10-year history of making a ton of money is gone; you become a dead player, right? One wrong move on reconnaissance as a SEAL and the brothers serving alongside you die. And obviously doctors. How would we transport that mentality to far less consequential careers, to areas where the consequences of being wrong are far less dire?

Sam Arbesman: Yeah. When I think about those areas, there's obviously accountability: make a wrong choice, or act on something out of date, and there are very real consequences. But there's also very clear feedback; you learn from the things you do. In other domains, the accountability and the feedback are far more attenuated, or we get feedback, but it's only loosely related to our decisions or our knowledge.

So for example, if you're a scientist, or you're working in research, and the consequences are worse for being willing to overturn something you thought was correct, for saying, "This is what I thought was right, but it's actually wrong; I made a mistake": if the admission of a mistake is punished more than the mistake itself, there's a big issue. So we need to find ways of incentivizing that admission, where we valorize people who say, "I thought it was this, and now I've changed my mind." I'm not exactly sure how to do that.

But I think incentivizing the admission of mistakes is going to be the key, because otherwise people will fight tooth and nail for the things they hold on to. There's a maxim from Max Planck: science proceeds one funeral at a time.

Jim O'Shaughnessy: And—

Sam Arbesman: Yeah. And people have actually tested that, and it sounds like it's not quite true. But it feels correct in the sense that we move forward not because people change their minds, but because the people who are unwilling to change their minds finally die or leave the scene. There are many instances where people do change their minds, though, and I think we need to praise that more: actually incentivize and valorize it, and say that this is a hallmark of success, that someone held a long-standing belief and changed it, or was willing to admit a mistake. I think that's the key to making the shift.

Jim O'Shaughnessy: Yeah, I agree. In my old business of asset management, one of the protocols we put in place: I told all of our traders, and everyone who actually touched the portfolio, that making a mistake or an error would not get you fired; trying to cover it up and not telling us about it would get you fired every single time. My goal, obviously, was for those things to be brought to our attention immediately so we could fix them. And it was really interesting. A young trader came up to me after that meeting and said, "That simple thing just makes me feel so much better." Because, you know, it's that kind of human nature.

And I'm thinking, we're talking about science and one funeral at a time, but back to the human OS, right? You know the story of David Bohm. When he published his hidden-variables paper, Oppenheimer, who had been his mentor, had been told by the US government, "That guy's a red, he's a communist; we don't want him rising in the hierarchy." And there are accounts of Oppenheimer going to his colleagues and saying, in essence, "If we cannot disprove Bohm, we must agree to ignore him." I wrote a little piece on it, basically saying, yeah, it's the plot of the movie Mean Girls, right? "You can't sit with us anymore, David."

Sam Arbesman: Oh yeah.

Jim O'Shaughnessy: But you know, the whole idea of the citadel of science and protecting one's turf, right? You get into these turf wars. The fact that much of science still clings to materialism I just find baffling, given all the things that have been discovered that suggest they might want to at least revise their thinking on some of it.

And also this unwillingness: if you're applying for a research grant, it's highly unlikely you're going to write your grant saying, "We think the result will be null," right? The funders, wherever they sit in the government, don't want to hear that. Which is why, and this is on our pretty large list of things we're trying to get around to, how cool would it be to have an AI just publish null results: literally run through a ton of experiments and publish them to a public database, right? Because you learn via negativa. If you're a mystery fan: Sherlock Holmes in "Silver Blaze," right? How did he know who it was? The dog didn't bark, and it didn't bark because it knew the person.

And yet when I speculate about this, the idea of publishing huge data sets of null results, people kind of look at me strangely. What do you think of that idea?

Publishing Negative Results and Incentivizing Diverse Scientific Activities

Sam Arbesman: I mean, I know there's been at least one scientific journal that tried to do that, a journal of negative results or something like it, and I don't actually know how successful it was. My sense is not very, because right now we incentivize certain kinds of things in science. The way I view it: there's the full space of activities that are valuable for science, and then there's the subset that gets you tenure, the things valued by scientific academia.

We need more ways of valuing those other activities, whether it's doing weird interdisciplinary work, helping other people without necessarily publishing your own papers, publishing negative results, or doing research with very high variance in its outcomes, where there's a decent chance it won't succeed. All of these things move science forward, but we often don't know how to incentivize them.

There was actually a paper, a number of years ago now, I think in the field of immunology, that looked at something like 50 years of research and asked who the most highly cited researchers were. But it also looked at the researchers acknowledged at the end of papers, and it found a group of scientists with a mediocre number of citations who were very highly acknowledged. And when those people died, the productivity of everyone around them dropped. It showed these people were genuinely important to science; they were doing something vital, whether handing out ideas or just being helpful.

And I don't think the solution is to suddenly say that the number of times you appear in acknowledgments counts toward tenure, because then it will be gamed. But we do need to recognize that there are many more activities that move the endeavor of knowledge growth forward, including, exactly as you say, negative results: finding all the ways things don't work. Often a lab runs an experiment, it doesn't succeed, they throw it in a drawer and move on. Then other labs do the same thing and recapitulate it. So there's a huge amount of duplicated energy and effort spent on things we already know don't work, simply because they're not in the public record. We need many more ways of incentivizing the publication of negative results, and helping, and all the other activities that move science and knowledge forward.
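
Editor's note: the citations-versus-acknowledgments comparison Sam describes can be sketched in miniature. Everything below is invented for illustration (toy names, toy numbers, and a crude ranking heuristic), not the actual study's data or method:

```python
# Toy data: citation counts vs. times acknowledged in other papers.
citations = {"alice": 9500, "bob": 300, "carol": 4200, "dana": 280}
acknowledgments = {"alice": 12, "bob": 190, "carol": 25, "dana": 8}

def rank(scores: dict) -> dict:
    """Map each researcher to a rank, where 1 = highest score."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {name: i + 1 for i, name in enumerate(ordered)}

cite_rank, ack_rank = rank(citations), rank(acknowledgments)

# "Hidden helpers": far more prominent in acknowledgments than in
# citations. Per the study Sam recounts, this is the group whose
# deaths were followed by a drop in their colleagues' productivity.
hidden_helpers = [name for name in citations
                  if cite_rank[name] - ack_rank[name] >= 2]
print(hidden_helpers)  # -> ['bob']
```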

Understanding Complex Systems Beyond Human Comprehension

Jim O'Shaughnessy: Yeah. As I was listening to you, I was thinking about your view that things have become so intricate and complex that they're getting harder and harder for us humans to truly understand. If key infrastructure is genuinely beyond human comprehension, what do we do? I know that's a loaded question, but you're the right guy to ask. How do we do reliable stewardship in practice, if this is correct?

Sam Arbesman: So I think part of it comes down to having a certain amount of awareness of the world we're in. The computer scientist Danny Hillis has written about how we've moved from the Enlightenment, when we applied our rationality to understand the world around us, to the Entanglement, where everything is so hopelessly interconnected that we can no longer fully understand it. That's clearly the world we're living in. But many of us remain in ignorance, sometimes willful ignorance, of this.

A number of years ago, when the Apple Watch first came out, there was a great quote I found in a Wall Street Journal article, the style section I think, about whether people would still wear mechanical watches. And of course people still wear mechanical watches. But this one guy said, "Oh yeah, of course I want to wear a mechanical watch. Think of how sophisticated and intricate it is, as opposed to a smartwatch, which is just a chip." "Just a chip": these things are orders of magnitude more complex, but we've been shielded from that. And I think that's part of the problem.

Because we're shielded from it, and we don't think about these things, when something goes wrong, when we're confronted by complex, overcomplicated situations we don't fully understand and failures start to cascade, we're going to be blindsided and really distraught.

So when I think about the right way to approach these technologies, it's somewhere between two extremes. Faced with technologies and systems we can't fully understand, people tend toward either awe ("oh my God, these AI systems are beautiful, practically the mind of God") or abject fear ("self-driving cars are going to kill us, AI is going to kill us"). A certain amount of concern is useful, but the extremes are the problem.

The problem with either extreme, going back to what I was saying about not being aware of how these systems work, is that both cut off questioning. If you think these systems are simply amazing, you're never going to learn from them. And if you fear them, you'll be so blinded by that fear that you can't interact with them productively.

So, going back to humility: humility is really the proper stance. We might never fully understand these systems, but there is a great deal of understanding between complete and total understanding and complete and utter ignorance. We can work in that space, slowly but surely trying to understand these things, whether through biological thinking, iterative tinkering, or gradually grasping different bits and pieces of a system: a humble, iterative approach to understanding the world. That's the approach we need.

And actually, going back to historical wisdom: prior to the Enlightenment, there was an understanding that we might never fully understand things. The philosopher, physician, and rabbi Moses Maimonides, in The Guide for the Perplexed, talks about how there are things we will never fully understand, things that are only in the mind of God, as it were. He even gives a list, which is interesting, because I think one of the items is the number of stars in the sky and whether that number is even or odd. And today we actually do know the number of stars visible to the naked eye. I think the count is even, but I don't remember exactly; don't hold me to that.

That being said, there was this understanding that there are things we might never fully grasp. And I don't want to conclude that because some things may be forever beyond us, we should stop trying to understand the world and the systems around us, systems we ourselves have built. We should definitely keep trying. But if we bump up against situations we can't fully understand, that's okay. Having that productive humility is really the path forward.

The Continuum of Understanding

Jim O'Shaughnessy: Yeah. And that brings me back, as I was listening to you, to the human OS again, right? This deterministic pattern: yes or no, 0 or 100, black or white. I guess we can blame Aristotle for that, because he influenced quite a few thinkers. But that's wrong; that is not the way the world works at all. To me, it seems the height of arrogance, and frankly stupid, to think we could truly 100% understand anything. And likewise zero understanding; I guess there might be cases of that, but it's always a continuum, isn't it?

Sam Arbesman: No, it's always a continuum. And I think the other thing is that, beyond understanding that it's a continuum, there's something to be said, even when we think we understand something, for preserving a little doubt: recognizing that we might be wrong, or that there are other ways of understanding the world.

There's the whole idea that history is written by the winners. But if you look at the Talmud, one of the interesting things about it is that it actually preserves the debates and the opinions of the side that lost; the opinions that didn't prevail are part of the discussion. And I think having that kind of intellectual humility ("okay, we might be wrong; here are a number of other opinions, and here's how people arrived at them") matters. In science, obviously, we preserve a lot of this knowledge, but we often don't think about all the paths not taken.

And this goes back to the historical sense, which is really understanding how we got to where we are: what people tried, and how they thought about these things. I think that can also help illuminate how to better understand the world around us. We're never going to fully get there, but we sure as hell must keep trying. And the more foundation we have to grow on, as long as we keep that context, the better positioned we'll be to actually understand.

Building Better Incentive Systems

Jim O'Shaughnessy: Yeah, and the challenge is how to build a system of incentives for people in, well, choose your discipline; it doesn't have to be science, it doesn't have to be business. A system that reinforces this, as opposed to the old system we're frankly still operating under: "You were wrong; that's going to cost you; you're fired."

One of the things we did with our stock-market and investment research was to keep what we called an investment graveyard: all the ideas we tried and thought would work that didn't. Because, again, you can really learn a lot from that. I'm just curious: how would you design for that?

You know, I'm looking down at my notes: in The Magic of Code, you frame programming as modern sorcery (I love that) and prompting as spell-casting. Love it. So can we cast some spells? Can we do some sorcery to get people incentivized to start thinking this way?

Sam Arbesman: Yeah. At least in the scientific realm, as I was saying, there's the space of things that are valuable for moving science forward, and then the small number of things academia actually incentivizes. One thing I've been thinking about is what new types of scientific organizational structures we need. When I think about the space of structures that allow you to do research, it's great that we have universities, corporate industry labs, and sometimes even startups doing fundamental research. But those are just three points in some weird, high-dimensional space of potential institutions. We actually need to explore that entire space.

And luckily, over the past three to five years there's been a lot of really interesting innovation here. People have tried to build new structures and new types of organizations: more interdisciplinary ones, ones that fund people rather than projects, ones that fund only projects, distributed ones, other weird variations. But to be honest, I'm agnostic about which structure will actually win out. People talk about a Cambrian explosion of new scientific institutions, and the flip side of any Cambrian explosion, in the evolutionary-biology sense, is that a pretty big extinction event probably follows.

So a lot of these will not survive. That's unfortunate for those institutions, but it's the process of learning. And to be honest, I don't have the answer for how to incentivize these things. I just want there to be more types of institutions and organizations where people can do more different kinds of things: weirder kinds of science, places where they can admit mistakes and failure, and so on. Maybe some won't win out, but presumably some type of institution with this healthier relationship to knowledge-seeking will end up being quite successful.

For me, I just want a thousand flowers to bloom, a million flowers, whatever it takes, because eventually we're going to hit upon new points in this high-dimensional space. So yeah, I just want more people to try things and experiment.

Jim O'Shaughnessy: Yeah, I completely share that point of view. You can't tell until you try a bunch of different things, and you've got to get yourself and your team really comfortable with the high extinction rate: if we're having a Cambrian-like explosion, a lot of these things are going to go extinct, and that's okay, right? In my view, declaring "we are going to achieve X," whatever X happens to be, and prescribing a single path to get there means you've over-indexed on one thing, and your likelihood of failure soars.

Go at it with a thousand flowers blooming and you're going to be much more successful.

Sam Arbesman: And I think this is one of the things Silicon Valley has done really well as a culture: normalizing that kind of failure. You make a startup, it fails, you move on. It's not a scarlet letter; you're not punished for all time. You learn from it, and you take that experience into doing something better the next time. Exporting aspects of that culture and that thinking to the broader society could actually be valuable.

Learning from Failure and Broadening Perspectives

Jim O'Shaughnessy: Yeah. Basically, some of the things I've been able to accomplish came out of a big failure, right? "Oh, I think we should do it this way," and then it's "Mayday, mayday" as I'm bringing the plane into the drink. But what you learn from those failures really helps. And I'm perplexed by what seems to be the majority view: being genuinely frightened to embrace that kind of approach to the world, again, not just in science, but more broadly.

Sam Arbesman: Yeah. And maybe part of it is our current cultural moment, where identity is so tightly wrapped up in professional success. If you have that, and all of your friends and colleagues, everyone you interact with, are not just wrapped up in professional success but are in the very same industry you're in, then when you have a failure, it can shake your entire community and social network.

If, though, work is work (meaningful and fulfilling as it can and should be), but you also have a broader group of friends and family who are entirely orthogonal to the things you're doing, then whether you try things and succeed or try things and fail, they don't care either way. That, I think, is really grounding and probably a lot healthier. So maybe what we need is a diversity of social networks. If you have a close group of friends who really don't care what you do professionally, it becomes a lot easier to try things and fail, or even to succeed. They don't care about your successes either, and that's grounding too, because you're the same person no matter what.

So maybe that's the answer for how we think about our relationship to work: we get a healthier relationship by keeping our social context somewhat separate from, and orthogonal to, our work and our profession.

What's Currently Obsessing Sam

Jim O'Shaughnessy: Yeah, I think there's a lot of promise in that. And we have intentionally been building teams at O'Shaughnessy Ventures with that in mind. We want people who are really bright, but who are not only humble but fascinated by the interconnections between things. Those conversations get really interesting. We had one instance where one of our fellows, a scientist developing a very particular way to analyze poop, essentially, spent an hour and a half with my editor-in-chief at Infinite Books, Jimmy Soni. She came over to me afterwards and said, "That was maybe the best hour and a half I've ever experienced, because he had me explain what I was doing to him, and then he outlined a way I could market it, or raise money for it, that I never, ever thought about." And I'm like, "Really? Tell me more." She said, "I didn't even think of the quantified-self angle Jimmy brought up. I didn't even consider that that type of person might be really interested in something like this. So I've completely rewritten the deck, thanks to my interaction with him."

So I've definitely seen value come out of that more than once. Right now, Sam, what is obsessing you? What has you saying, "This is so cool"?

Sam Arbesman: So certainly, given that The Magic of Code just came out, I'm thinking a lot about that. But I've actually been thinking even more broadly about the book's underlying idea: that computing is not just a branch of engineering but a humanistic liberal art connecting to language and philosophy and biology and art, all these different things. I've been wondering what it would mean to take this humanistic computing approach really seriously, almost within our education. Do we need new types of curricula, or courses, or ways of thinking about it?

And to be honest, I'm still trying to figure out what that means. Going back to my obsession with list-making, I have a long list of books and articles that, for me, evoke the right aesthetic, things at this weird intersection. I've also begun collecting courses I've found online with interesting syllabi that evoke the same sense. I want there to be more of that. I don't know if it will become a new field (I wouldn't presume it will), but I think there's something there.

That's one area. Another topic isn't quite top of mind, but it's been itching at the back of my mind for what is probably years at this point: I mentioned SimCity earlier. SimCity was made by Maxis, the company Will Wright built, which also made SimEarth and The Sims. Maxis's heyday was the early-to-mid '90s, this weird moment when a game company could also play in the realm of complexity science and other unusual sciences, building strange things that weren't really games so much as toys that taught people different models for understanding the world.

I write about this periodically, and I've been talking to people about what it would mean, and whether it's even possible, for there to be a Maxis 2.0. Could there be another company that built these new simulation toys, or bridged the gaming world and various scientific domains or other esoteric fields? Or was it just a peculiar moment in the '90s that allowed this kind of thing to happen? I don't think so; I believe there's a real possibility for a Maxis 2.0. I've been talking to a lot of people in the gaming industry and elsewhere to see if this could even be a thing.

I have no idea what it would look like, but I constantly return to this idea of Maxis 2.0 as a placeholder in my mind for something I want to exist in the world.

Jim O'Shaughnessy: Yeah, me too, because what I love about that is that games are fun. When my kids were growing up (they're all adults now), my daughter was absolutely, completely infatuated with SimCity. She would play it for hours. At dinner I'd ask, "Well, what did you learn?" And I've got to tell you, she became this font of wisdom about things where I'd go, "Huh, I never thought about it that way." The idea that you can learn while having fun seems so self-evident to me. In fact, I would love a 2.0 version of that, because who knows, given today's tools, what kinds of things we could come up with by just tinkering, just playing.

Sam Arbesman: Yeah, and right, nowadays, could you just build your own SimCity, with whatever rules you want? There could be so many interesting things. Certainly, with the computational power at our disposal, you could build unbelievable, weird, playful simulations. But there's also that blending of education and gameplay. With the original SimCity, I remember the manual had essays in it, and a bibliography. I pored over those essays and read the bibliography; I convinced my mom to take me to the local university library to find some of the books. It was amazing. Yeah, it was fantastic.
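
Editor's note: to give a concrete sense of how small the core of a "simulation toy" can be, here is a minimal sketch of a SimCity-flavored growth loop. It is a toy model invented for this post (nothing from Maxis's actual code), in which each cell of a map gains population according to its neighbors, and every constant is a rule a player could tweak:

```python
import random

SIZE, STEPS = 8, 20
CAPACITY = 9        # max population per cell: a "rule" to tweak
SPILLOVER = 0.3     # how strongly populated neighbors attract growth

# Start with one settled cell in the middle of an empty map.
grid = [[0] * SIZE for _ in range(SIZE)]
grid[SIZE // 2][SIZE // 2] = 3

def neighbor_pop(r: int, c: int) -> int:
    """Total population of the four orthogonally adjacent cells."""
    return sum(grid[r + dr][c + dc]
               for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
               if 0 <= r + dr < SIZE and 0 <= c + dc < SIZE)

for _ in range(STEPS):
    new = [[0] * SIZE for _ in range(SIZE)]
    for r in range(SIZE):
        for c in range(SIZE):
            pull = SPILLOVER * neighbor_pop(r, c)   # neighbor attraction
            growth = 1 if random.random() < min(pull, 1.0) else 0
            new[r][c] = min(grid[r][c] + growth, CAPACITY)
    grid = new

for row in grid:    # crude text "map": digits are population, dots empty
    print("".join(str(cell) if cell else "." for cell in row))
```

Change SPILLOVER or CAPACITY and rerun, and the settlement pattern changes with it. That feedback between a rule and a visible outcome is the pedagogical trick these toys rely on.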

Jim O'Shaughnessy: Well, Sam, I know I've been having a delightful, fun conversation when an hour and forty-five minutes goes by and my producers start buzzing my cell phone saying, "Hey, Jim." This has been really fantastic; I love chatting about things like this with people as well informed as you. Our final question is a fun one, at least for me. We're going to make you emperor of the world for one day. You can't kill anyone, and you can't put anyone in a re-education camp. But what you can do is take the magical microphone we hand you and say two things into it that will incept the entire population of the world.

Whenever their next morning comes, they'll wake up thinking of the two things you incepted in them, and they'll say, "You know what? Inspiration is really perishable. And unlike all the other times I had a great idea, I'm going to actually act on these two things." What two things would you incept in the world to make it one you want to live in?

Sam Arbesman: Yeah. So I think the first one probably won't be that surprising: "You might be wrong." Going back to the ideas of intellectual humility, I think we need that idea much more in our society. The second one would be "use libraries more." I grew up in libraries; public libraries are where I learned so many things and was exposed to so many ideas and all these different books. They're amazing. And I feel like not many people use libraries much anymore, and I just want more people to. So "use libraries more." That would be my second one.

Jim O'Shaughnessy: You just incepted me, because I, like you, loved libraries and went to them all the time. When I moved to New York, one of my happiest moments was walking into the New York Public Library. And I haven't been using libraries nearly as much, so I'm going to take that one. You have incepted me.

Sam Arbesman: Awesome. When I was very little, my father knew the best way to get hold of my mom: he would call the library, and he wouldn't ask for her by name. He would just say, "Can Sam's mom please come to the phone?" I was known there because I was there all the time, this two-year-old or whatever I was. So I grew up in libraries, and I still try to walk to the library almost every single day. We just need to use libraries more. So that's great.

Jim O'Shaughnessy: I love it. I love it. Sam, thank you so much; this has been so much fun. People can find you on social media (I know you're on Twitter and all the others), and they can get your book everywhere.

Sam Arbesman: Yeah, you can get all my books everywhere; the newest one is The Magic of Code. I actually don't really use social media that much anymore, but if you go to arbesman.net (just my last name, dot net), that's my website, and it has links to my newsletter, the podcast I do with Lux, and all the other weird writing things I do. So find me online.

Jim O'Shaughnessy: Terrific. Sam, this has been a joy. Thanks for giving me the time.

Sam Arbesman: Oh, thank you. This is fantastic.

Jim O'Shaughnessy: Cheers.


