Channel: Pierre Teilhard de Chardin – Footnotes2Plato

Evolution as Cosmic Cognition (dialogue with Richard Watson)


Matt Segall: Well, where do I want to begin? Richard, I know that you do a lot of work on evolutionary theory and evolution as a learning process or a cognitive process. While you have a lot of respect, if that’s the way I can put it, for Darwin’s theory of natural selection, it seems not to explain everything in the biological world that is there to be explained. So you’re trying to find other scientific explanations for the kinds of organization and behavior and, dare I say, consciousness and agency—things that traditionally have been difficult to get a scientific grasp on but are evident in the living world. I want to talk to you more about that. From my point of view as a philosopher, it seems to me that evolution, over the course of the 20th century, became less just a theory of biological speciation and more a cosmological principle, something that’s at play at every scale. There may be some general account of evolution that applies across all these scales from the atomic to the astrophysical and everything in between. But there is something unique going on in the living world. So, I guess when you think about evolution writ large—say cosmological evolution—how would you frame that in a sort of general enough way that it applies across all these scales? And what makes biological evolution unique in your view?

Richard Watson: Yeah, thank you. You’re right that I have some respect—that’s probably the right word—for what Darwin achieved in the theory that he proposed. But, as my research progresses, I’m less and less convinced that it explains the things it’s believed to explain. It definitely happens. It’s true that populations contain variation, some of that variation is heritable, and some of that heritable variation affects survival and reproduction. So, natural selection occurs. But to get from that to the complexity and diversity of life is quite a leap.

There’s something you mentioned about the sort of universality of selection. It’s invoked in lots of domains that aren’t biological. We can talk about variation and selection processes in technology, cultural practices, or cosmological constants. It gets invoked in lots of other domains. There’s another kind of universality which is more of an exclusivity claim. Dawkins claimed that natural selection is the only possible process that could produce adaptive organization spontaneously. That’s something which I think is plainly wrong, but it’s something that greatly limits our understanding of what kinds of natural phenomena can be adaptations and what kinds of natural phenomena can be adapted.

You also asked about what’s different about living systems. I think I’m comfortable with there being something special about living systems, but by a degree rather than categorically. I’m moving towards a view where certain kinds of physical systems can be cognitive spontaneously, but most physical systems aren’t very cognitive. The biological ones are special in that the cognition they exhibit has considerable depth, whereas a bag of rocks doesn’t have much cognitive depth. So, that’s the sort of territory that I’m in.

Matt Segall: I’ll ask a simple question—maybe it’s not so simple—but when you use the word cognition, what do you mean? Is a cognitive system necessarily conscious or not?

Richard Watson: I’m referring to processes of memory, learning, and adaptive behavior. I don’t know what to do with consciousness. I feel that’s a failing, but I feel like I’m not alone.

Matt Segall: Oh yeah, no, you’re in good company there. I definitely agree with you that biological forms of organization are a matter of degree rather than kind, which I think is a somewhat unusual position to take. I just finished reading and reviewing a paper by Johannes Jaeger, Anna Riedl, John Vervaeke, and others about relevance realization. They strongly argue that there’s a categorical difference between biological organisms and anything in the physical and chemical world, including dissipative structures or non-living forms of self-organization. They’re arguing strongly against the idea that there could ever be an algorithmic or computational realization of biological agency. Vervaeke’s technical term is “relevance realization”—it’s not algorithmic, they say, and they give some pretty strong reasons for that.

But it seems to me that in an effort to distinguish the living world from the rest of what the physical universe is up to, they may help us understand how consciousness can arise in multicellular organisms with nervous systems and all of that. But it leaves us with a new explanatory gap, which would be that between non-life and life, namely the issue of the origin of life. How could a self—not just self-organizing but self-producing—system, an autopoietic system if you will, ever get started? The autopoietic theorists like to argue that actually self-production logically precedes reproduction, and so evolution by natural selection presupposes autopoiesis. There’s some kind of a non-evolved organizational principle that has to be there before this wonderful process of natural selection can lead to any form of adaptation and speciation.

So, when you make that kind of categorical division between life and non-life, the origin of life seems to become totally intractable. It’s just as hard as the hard problem of consciousness, really. When you say you think this is a matter of degrees, can you say a bit about how you imagine the origin of life? Do you have any favorite theories, or how do you approach that sort of problem?

Richard Watson: You want to start with an easy one. 

Matt Segall: I’m a philosopher, so whenever I get a scientist, I ask the hard questions—the basic questions, you know.

Richard Watson: Well, I wonder if I might parry that a little bit with the idea that if only we could get to self-reproduction, natural selection would kick in and then everything else would follow. I don’t think that’s right. I don’t think that it does as much explaining as it’s purported to do or people have faith in it doing. The idea that if only we could explain the origin of evolution, then the origin of everything else would be fine—I don’t think that’s quite the right way to look at it. And while I would agree that it might be true that a system needs to be self-sustaining before it can be self-reproducing, I’m resisting slightly the implication that we have to explain self-sustaining because we have to explain self-reproduction because if we did that, it would explain everything else.

I think that way of framing the question—what’s the thing that we’re trying to explain? We’re trying to explain why these particular things are the things that we see in the living world and not other things, right? Why are those things the things that are here? When you frame it that way, it automatically becomes a question of explaining why these things persist or proliferate when those things didn’t. The answer to that has to be because of their capacity to persist or their capacity to proliferate. It’s as though the question has been defined as which of the things survive and reproduce the best, as though that was the thing that needed explaining. But I’m interested to know how a process-relational ontology expert sees the thing that needs explaining because, for me, it feels more like the thing that needs explaining is processes of transformation, processes of change rather than invariance, rather than things that persist. The idea of if only you could explain how this thing persists so well when other things didn’t—that’s halfway there to explaining the complexity and diversity of life, and I don’t feel like that’s the case at all.

I imagine that a process-relational ontology perspective is less inclined to view the persistence of things as the thing that needs to be explained. Am I right?

Matt Segall: Yeah, that’s exactly right. You know, Alfred North Whitehead, a 20th-century process thinker, has this little book called The Function of Reason. He’s really focusing in on Darwin’s theory of natural selection. There’s a tautological way in which it can be framed where the organisms that survive today are here today because they survived in the past. From his point of view, what needs to be explained in evolution is why there should be this upward trend in complexity. If survival was the name of the game, he jokes, rocks are doing a far better job than any organism. Evolution doesn’t seem to be a single arrow towards some particular form of complexity, but it does appear that we’ve gone from simple single cells to more complex single cells to multicellular organisms and animals. There’s an increase in the sensitivity and fragility of organisms over the course of evolution. Survival power is not what’s being increased, so there’s something else that needs to be explained besides just what natural selection can account for. Darwin never really claimed to do anything more than offer a mechanism that could explain speciation. With some of the discoveries in molecular biology in the 20th century, the claim was inflated a bit to be an explanation for life as such. Darwin was never claiming that, you know. I think he felt there was more going on than his natural selection theory could account for.

Richard Watson: It didn’t explain speciation either, but that’s another topic.

Matt Segall: Sure, it’s part of the explanation, right? What needs to be explained is this trend toward complexity. It seems like there are other sources of agency at play here. Organisms are not passively selected by a fixed environment. From the get-go, organisms are engaged in the transformation of their environment. That doesn’t mean selection can’t happen—of course, selection is still happening—but the selection is coming from both directions at once. There’s some hidden factor here, hidden to reductive forms of materialistic science, that’s active in the living world and maybe, if life is a matter of degrees, even in the physical world as such, that’s driving this complexification process. It doesn’t need to be that there’s one global telos and we’re all being lured towards this Omega point, as you know Teilhard de Chardin would argue, but there do seem to be teloi—plural—at work, agencies at work. When you begin to factor that into evolution—and maybe this is similar to what you mean by cognition, memory, learning—there are forms of inheritance that are far beyond just the genes, far beyond even anything epigenetic that’s merely physical. There’s a kind of inheritance of what’s been learned that gets passed on to each generation. It will affect the genome eventually, but which direction are the causal arrows going here?

Richard Watson: I completely agree. It’s considered to be anomalous and exceptional that we recognize there are non-genetic mechanisms of inheritance, that organisms can modify their own selective environment, or that organisms have mechanisms of adaptive phenotypic plasticity in addition to their genetic adaptation. These things are treated as sort of add-ons to the core engine of natural selection, which is doing all the driving. Why aren’t they central? Well, because the engine of natural selection is the thing that creates all the adaptation. If the niche construction is adaptive, the phenotypic plasticity is adaptive, or the epigenetic inheritance is adaptive, it’s only adaptive because natural selection made it so. Otherwise, it’s just stuff that happens—physical things in complicated contexts with all sorts of feedbacks. They can’t contribute to adaptive complexity because there isn’t any way that anything can contribute to adaptive complexity in the natural world spontaneously without design unless it was done by natural selection. That’s the real roadblock, the way I see it. It seems obvious to me that natural selection isn’t the only mechanism capable of producing adaptive organization because systems that can learn acquire adaptive knowledge or adaptive organization that isn’t put there by natural selection. When you learn something, the adaptive knowledge you learn is not acquired through a selective process, and the credit for it being adaptive should not be given to the genetic evolution. It might be the case that the learning machinery was created by genetic evolution, but already that’s enough to recognize that learning is a different process of acquiring adaptive organization that isn’t the same as natural selection. It doesn’t have to be a natural selection process going on in the head in order for you to acquire adaptive information. It’s a transformational process. 
If we model it as a simple neural network, the changes in the weights are simply following a gradient—they don’t need to do variation and selection to find good wirings.
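A minimal sketch of the contrast being drawn (my illustration, not Watson’s code, and the target relationship is chosen arbitrarily): a single linear neuron trained by gradient descent changes its weights deterministically, following the error gradient, with no population of variant wirings being generated and selected.

```python
import random

random.seed(0)

def target(x1, x2):
    # The relationship to be learned: y = 2*x1 - 1*x2 (arbitrary choice)
    return 2.0 * x1 - 1.0 * x2

w = [0.0, 0.0]      # connection weights
lr = 0.1            # learning rate

for _ in range(2000):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    err = (w[0] * x[0] + w[1] * x[1]) - target(x[0], x[1])
    # Each weight steps directly downhill on the squared error --
    # a transformational update, not a variation-and-selection search.
    w[0] -= lr * err * x[0]
    w[1] -= lr * err * x[1]

print(w)  # converges toward [2.0, -1.0]
```

Nothing here generates alternative weight vectors and picks the fittest; each update simply follows the local gradient of the stress (error) on the current configuration.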

So, the question becomes not “are there any other processes of adaptation?” Instead, the question could become “what kinds of systems can learn?” Do all the systems that can learn have to have machinery that was itself created by natural selection, or can it be a primary source, a prime mover in adaptive stories? I think it can. The criteria for a system to be able to learn in a primitive sense at least are not onerous. You start with a system that can hold state—a memory. That’s trivial in physical systems. A bed of clay holds information about its past experience or interaction with the environment, but it only holds one memory. It gets overwritten by the next memory. The thing that begins to move along the spectrum of possibilities that we would call cognition is that the internal state that’s held from past experience can be held in a compressed representation that can then be brought back. It’s not a one-to-one imprint like a print in clay is, but it’s a change to the internal organization of a dynamical system that enables it to recall that pattern of shape and form which was a consequence of its interaction with the environment in the past. It’s able to recall that pattern of shape and form in the future or in a different context. That’s not difficult to do with physical systems of particular properties either.

Work I’ve done with Chris Buckley at Sussex shows that you can do the same kind of associative learning that you can do in a simple artificial neural network. You can do that with just a system of particles connected by springs, if the springs are slightly plastic—if the springs are perfectly elastic, then it can’t actually remember anything because it just goes back to where it was before. But if the springs are slightly plastic, then it has a dynamical behavior, but the dynamical behavior is affected by its past states. The changes in the natural lengths of the springs are mathematically isomorphic to the changes in the connection weights in a learning neural network. Not because it’s engineered to do that, it’s just because that’s what happens when physical systems are pushed by their environment. They give way under that stress, and they give way in the direction that makes them accommodate to that stress. It’s not difficult to get physical systems to demonstrate memory. And that memory is compressed: the springs represent the relationships between particles, not the unique positions of particles. It’s also a memory that can remember more than one thing, a memory that can generalize so that it recalls other patterns from the same class instead of just the instances which have been seen in the past.
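A hypothetical toy version of this idea (my own sketch, not Watson and Buckley’s actual simulation; the pattern values and plasticity rate are arbitrary): particles on a line connected by slightly plastic springs. Holding the system in a training configuration lets each spring’s natural length creep toward the distance it is held at; clamping one particle afterwards and letting the rest relax recalls the trained pattern from a rough cue.

```python
import itertools

pattern = {0: 0.0, 1: 1.0, 2: 3.0}           # training positions (1D)
pairs = list(itertools.combinations(pattern, 2))
nat_len = {p: 0.0 for p in pairs}             # natural lengths play the role of "weights"
plasticity = 0.1

# Training: while the environment clamps the particles in the pattern,
# each spring "gives way", its natural length creeping toward the
# distance it is being held at.
for _ in range(100):
    for (i, j) in pairs:
        d = abs(pattern[i] - pattern[j])
        nat_len[(i, j)] += plasticity * (d - nat_len[(i, j)])

# Recall: clamp particle 0 at its trained position, start the others
# from a rough cue, and relax by gradient descent on total spring energy.
pos = {0: 0.0, 1: 0.5, 2: 2.5}
for _ in range(2000):
    for k in (1, 2):
        grad = 0.0
        for (i, j) in pairs:
            if k in (i, j):
                other = j if k == i else i
                d = pos[k] - pos[other]
                sign = 1.0 if d >= 0 else -1.0
                grad += (abs(d) - nat_len[(i, j)]) * sign
        pos[k] -= 0.05 * grad

print(pos)  # relaxes close to the trained pattern {0: 0.0, 1: 1.0, 2: 3.0}
```

Note that the springs store relative distances, not absolute positions, which is the compression Watson describes: the memory is in the relationships among components.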

That’s memory and learning. The last step is can you get from learning in the sense of being trained to remember a set of patterns to adaptive behavior, which is the ability of a system to find configurations that alleviate stress better than a local hill climber would? When you push on a physical system, it gives way. If there are any constraints in the system, it gives way until it finds a locally optimal configuration and then it’s stuck. It can’t get to better configurations than that—better in the sense of alleviating the stress among its own components. But if it is subject to a set of disturbances over time and it has some of that plasticity to it like the springs that give way, then it’s able to change its dynamical trajectories in such a way that it gets better at finding configurations that alleviate stress between its components over time. It learns how to find configurations that are less stressful, which is another way of saying that it learns about the pattern of operations or it learns about the structure of its own environment. It becomes a model or a mirror of its environment, not just a print like the clay was but a deeper model that knows about the dynamical causal relationships among variables in the environment that it’s inducing into the internal variables of the system. All of that can happen without there being any natural selection involved. If you have a physical system with a boundary—a Markov blanket, as Friston would say—with internal degrees of freedom that can be deformed through their interaction with the environment, it will become a model of the environment. You don’t need any learning mechanisms that evolved for doing that or any natural selection involved to make the system do that. That happens spontaneously.
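A numerical sketch of the baseline behavior described above (my hypothetical stress landscape, not a model from the dialogue; only the stuck-at-a-local-optimum case is shown, not the plastic-system escape Watson argues for): a purely elastic system gives way downhill and then sticks at the nearest locally optimal configuration, even though a lower-stress configuration exists elsewhere.

```python
def stress(x):
    # Arbitrary landscape with two basins: a shallow local minimum
    # near x = 2 and a deeper one near x = -1.
    return (x + 1) ** 2 * (x - 2) ** 2 + 0.5 * x

def grad(x, h=1e-6):
    # Numerical gradient via central differences.
    return (stress(x + h) - stress(x - h)) / (2 * h)

x = 1.5                          # start inside the shallow basin
for _ in range(10000):
    x -= 0.001 * grad(x)         # local hill descent: always downhill

print(x)  # stuck near x ≈ 1.97; never finds the deeper minimum near x ≈ -1
```

This is the "locally optimal and then stuck" behavior; Watson’s claim is that plasticity plus repeated disturbance lets a system do better than this over time.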

I think it’s more fruitful to think of cognition of that kind occurring spontaneously as something that comes first in living systems and the particular special machinery of doing self-reproduction that involves natural selection is something that’s derived from that as a particular way of storing information that has already been learned by the dynamics of the system. Genes are a way of storing information that’s already been learned rather than thinking of genetic information and natural selection acting on it as the engine that created the cognition. It completely turns it around. Does any of that make sense?

Matt Segall: Oh, yeah, absolutely. I want to lay out a particular scenario for the origin of life and see how you translate it into these terms. This particular approach is often called the hot spring hypothesis. David Deamer and Bruce Damer and colleagues have developed this. It involves life originating on volcanic islands in freshwater, actually, where at the edges of these volcanic pools that would be regularly dehydrating and then rehydrating when a geyser goes off, say. There would be all the lipids you need and the organic chemistry and RNA and everything there to create a situation where you get these spontaneously forming protocells that have packets of chemistry inside them during the wet phase. Then as the pool dehydrates, you get this sort of gel-like phase where that chemistry can mix. Their idea is that in repeated cycles of wet and dry, the dehydration allows for longer and longer polymers to form. You get a kind of testing. They do see this as a kind of selection process where those protocells that bud off when the pool refills with water, they bud off from the side with their own unique little collection of molecules inside. They’re being tested in that wet environment for stability. This process just iterates over and over again, and those packets of chemistry that survive come back into that gel-like phase and share their battle-tested wares with that kind of lipid network forming there. Then the water fills up again, and they bud off. Obviously, there is a kind of chemical selection going on here, but there might also be very basic forms of memory and learning going on. It sounds like we can combine your account with what they’re suggesting. You also get the added benefit of a kind of chemical selection here. These accounts seem compatible to me, but it seems like what you’re suggesting is another missing piece that could make this scenario more plausible. 
Any scenario for the origin of life is going to be more plausible if memory and learning are happening before you have a self-producing living cell.
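A toy caricature of the cycling described above (entirely my own sketch, not Damer and Deamer’s actual model; the ligation probability and the stability rule are invented assumptions): in the dry phase, polymers inside each protocell may ligate into longer ones; in the wet phase, the more stable half of the protocells survive and bud off twice over, restoring the population.

```python
import random

random.seed(1)

protocells = [[1, 1, 1, 1, 1] for _ in range(50)]   # each cell holds polymer lengths

def stability(cell):
    # Hypothetical rule: longer polymers make a sturdier protocell.
    return sum(cell) / len(cell)

for cycle in range(15):
    # Dry phase: with some probability, two polymers in a cell ligate.
    for cell in protocells:
        if len(cell) >= 2 and random.random() < 0.5:
            a = cell.pop(random.randrange(len(cell)))
            b = cell.pop(random.randrange(len(cell)))
            cell.append(a + b)                       # lengths add on ligation
    # Wet phase: stability test; the sturdier half bud off twice over.
    protocells.sort(key=stability, reverse=True)
    survivors = protocells[:len(protocells) // 2]
    protocells = [list(c) for c in survivors] + [list(c) for c in survivors]

mean_length = sum(p for c in protocells for p in c) / sum(len(c) for c in protocells)
print(mean_length)  # mean polymer length has grown well beyond the initial 1
```

Even this bare caricature shows the two ingredients the scenario combines: a transformational process (ligation during drying) and a selective filter (the stability test during rewetting).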


Richard Watson: You can look at it this way. I agree that that’s a completely compatible story. If we take natural selection to be the engine that produces all adaptation and we’re, for the sake of argument, confident that after we’ve got natural selection everything else just follows, then the origin of life becomes a problem of the origin of evolution. Certain non-trivial machinery is required for a system to be self-reproducing, right? That doesn’t come out of thin air. Conversely, if we were to take a cognition-first theory for life where natural selection is doing the bidding of life, which is intrinsically cognitive, then we need machinery—we need to jumpstart that story. Where does the cognitive machinery come from? If we compare the stories for plausibility, it becomes one of what’s more likely: that cognition occurs spontaneously or that self-reproduction occurs spontaneously. If you have in mind cognition like ours that requires brains and neurons and can do the kind of reasoning that we do, then the answer seems hands down that self-reproduction is more plausible to occur spontaneously than cognition. But suppose you have in mind a more, shall we say, basal notion of cognition—the sort of cognition that all living systems do, where wherever we look in the tree of life we find abilities to do memory, learning, and adaptation that don’t involve neurons and brains. If we think about it that way and think about things like a basic kind of memory is just an ability to hold an internal state, a basic kind of learning is just a way to hold an internal state in a compressed way, and a basic kind of adaptation is just that that compressed information modifies the way you interact with the environment in the future, now it doesn’t seem so ridiculous that cognition could come first. 
It’s an empirical matter ultimately of whether it’s more plausible, less plausible, or what the actual fact of the matter was about what kind of machinery is more likely to arise or arose first. I’m pushing back against the assumption that self-reproduction is the sort of atomistic engine of all adaptation. I think cognition is more primitive than that.

Matt Segall: In autopoietic theory, people like Evan Thompson make a claim referred to as the mind-life continuity thesis—that living organization is already cognitive down to the single cell. I don’t think Evan would want to go beyond that and say that cognition is occurring outside of that context. Maturana and Varela define autopoiesis in terms of operational closure. It sounds like the scenario you were describing earlier with just these balls and springs—there’s nothing like operational closure there. Nor is there self-production; the balls aren’t making the springs, the springs aren’t making the balls. They just found themselves in this situation. On their definition of cognition, there’s this operational closure whereby the environment can perturb this cell, but how that cell responds is based on its own internal dynamics. Further, they would say that this cognitive agency that’s realized as a result of this closure is simultaneously giving rise to a horizon of concern or a very basic form of subjectivity. Cognition in this sense presupposes a kind of concern for one’s own continued existence, a kind of precariousness that comes alongside this capacity to be a cognitive agent. It seems like the way you’re defining cognition does not presuppose agency. It arises spontaneously, as you say, and there’s no subjectivity there who cares about it. It’s just there’s memory here, there’s learning here, but it’s nobody’s memory, it’s nobody’s learning, it’s just happening in the environment.

Richard Watson: All matter is dynamic, right? There’s no real hard balls in space. The questions we’re asking are about the kinds of systems where that dynamism becomes relevant to the scales that we’re interested in. Usually, in physical systems, the dynamic nature of the atoms and the subatomic particles just isn’t relevant to anything that we’re looking at because it’s all disorganized and cancels out by the time we get to the scale of rocks, and they just look inert. That’s what we mean by an inert thing. But in living things, the different scales of organization, the different scales of dynamic process, are connected in a way that’s unusual in the physical world. What makes them interesting, what makes them living, is that the dynamical processes at one level of organization are connected to dynamical processes at many other levels of organization. That’s the property that makes the system able to hold on to information in a way that doesn’t look like an imprint. If it holds on to information at the level that it was given, like I give you some information at this level of organization, and you take a print of it at that level of organization, that’s not a memory, that’s just a print. That’s not an agent, that’s just something that was affected by its environment. But when the information at that level of organization is pushed down through many levels of organization, at the level of organization where it started, it looks as though it’s gone—like where did it go? It just dissipated away. But if you can bring that information back from those other levels of organization back to this level, then it looks like something that’s been recalled. It looks like something that, well, I’m trying to work towards how that’s connected to agency or a memory of something rather than just a physical thing that’s happening. It’s a physical thing that’s happening, and I did have a connection there. The connection was… it’s gone.

Matt Segall: Memory is a fleeting thing. Well, if it comes back to you, interrupt me. From a process ontological point of view, as you’re saying, yes, matter is dynamic. Actually, in Whitehead’s scheme of thought, you wouldn’t talk about matter anymore—you’d talk about creativity. When you ask, “What are things made of?” It’s made of creativity, it’s made of creative process. An atom is not a little BB but a kind of vibratory streaming of energy, and there’s a certain pattern that reiterates. Every element has its own vibratory frequency. I think it’s fair to say that there’s a kind of primitive memory at work already at that scale, allowing for the reiteration of a vibratory pattern. Rather than thinking of memory, a genetic reductionist would probably say, “Oh, when we talk about genes as a kind of memory, we’re making an analogy from our conscious experience of memory, and it’s really just molecules moving around that happen to be in the right dynamic patterns to produce the phenotype or what have you.” But if memory is just basic to material organization at every scale, I think that leads us to the position we both share, which is that living organization is a matter of degrees. The harnessing of this memory capacity that’s just part of how organization at any scale arises—in fact, organization becomes a function of memory already. I’m very interested in these sorts of accounts of evolution as a continuous process. Emergence plays a role where you get these kinds of leaps to higher-order forms of complexity, but the basic functions I don’t think could emerge. Memory being one of them, consciousness being another. I think most process philosophies are also panpsychist in orientation. There are some panpsychisms that are not process-relational, so it’s important to keep that in mind. The popular forms of panpsychism in analytic philosophy today are not process-oriented generally; they’re substance-oriented. 
So, you get this notion of consciousness being the “intrinsic nature” of an electron or something. That would not be a process way of speaking about it because the idea of an intrinsic nature—you’d say, “Oh, well, physics understands the relational properties of photons and electrons, and the intrinsic nature of those particles is consciousness.” So, you say a photon has consciousness. From the process point of view, that doesn’t make any sense because it implies a substance with properties. Process-relational thought is an alternative rendering of that where experience—the kind of experience that goes all the way down—is precisely more about this memory function. In the inorganic physical world, the predominant form of experience is memory. There’s a kind of conformal feeling, as Whitehead would say, where the past is reiterated. Even in the pre-biological physical world, there is a little bit of learning because the universe didn’t just stay plasma, it didn’t just stay hydrogen atoms. It learned how to do other stuff. Gradually, that capacity for memory becomes a capacity for learning, as you were explaining, adaptive behavior. The duration of a moment of experience and its capacity to not only remember what came before but to anticipate alternatives in the future begins to dilate as evolution advances. The system that is doing the remembering becomes more complex. You end up with a continuous arc of evolution in that case.

I know you mentioned consciousness earlier, and you maybe don’t have anything particularly helpful to say about it, but it seems to me that cognition already implies that there’s some kind of experiential dimension to this. What do you think?

Richard Watson: I wonder. I feel like I can go quite a long way without invoking questions of an experiential nature. I can talk about what kind of problem-solving my little particles connected by springs can do without talking about what experience they’re having of it. But I’ve come to a position where I’m not afraid of words like cognition and agency. I’m mostly just not ready for consciousness yet because I’ve gone through a process of recognizing how easy it is to be blind to the aspects of living things that are actually important. To ignore the fact that living things are agential, for example—to think of agency as a non-thing, that survival and reproduction are proper things, but agency, “we don’t need to talk about that, it’s not really relevant.” How can agency not be relevant when you’re trying to explain living things and how they arose? Having gone through that process, I have to concede that it may be as ridiculous to say, “Yes, I’m interested in living things and their agency, but for now, I’m just going to ignore the fact that I’m conscious and that you are too,” as though that’s not part of the question. I’m not looking at it. It’s like, “La la la,” you know? I know there’s something there that I’m ignoring. That’s an issue that ought to be on the table that I don’t want to put on the table.

Matt Segall: Well, you can’t do everything. As a philosopher, I ask questions like, “how is natural science possible?” In other words, how did there come to be organisms capable of reflecting on the nature of the universe? That obviously presupposes that we’re conscious agents. The fact that there is science at all presupposes that there are conscious, intelligent agents who are asking these questions, doing this research, and designing the experiments. It strikes me as odd that for the better part of the last several centuries, while we’ve really been perfecting this method, the dominant account of the universe has been more or less mechanistic, describing a universe wherein intelligent agents capable of doing science should not exist. We should not be here.

Agency is something that we all know and love. The idea of effort is at the very core of our everyday lives—getting up out of bed and deciding what to do with myself. I know there are plenty of people like Robert Sapolsky, for example, who’s written a popular book arguing that we’re all determined and agency is all an illusion. I think that’s a completely self-undermining, performative self-contradiction—those types of arguments. When we’re trying to understand life, we’re not just trying to understand some kind of neutral objects out there that we somehow exist independently of. We’re trying to understand ourselves ultimately, and we know ourselves from the inside out. If we bring our own existence back into the equation, something like panpsychism or what Whiteheadians usually call panexperientialism—because he’s not actually saying psyche or soul or certainly self-reflexive consciousness goes all the way down, just basal experience—becomes more plausible. 

There are different arguments one could make for that, but the basic idea is that we have one example of the inside of a physical system, namely ourselves. To be civilized, ethical, moral beings, we have to assume that the agency we feel is effectual, that it actually makes a difference in the world. Indeed, evolutionarily speaking, if this experiential aspect had no effect, it couldn’t ever be selected for. If it isn’t connected somehow to the behavior of organisms, then it’s even more mysterious. It’s just purely an epiphenomenal ghost and becomes even more difficult to explain. The idea here would be—the reason I would bring experience into this and suggest that it might go all the way down is because otherwise, we can’t account for ourselves as the ones who are asking these questions. In so many cases in physics over the last century, we’ve had to bring the observer into the equation, so to speak. I think increasingly, it needs to be the case in biology as well. Our first-person experience is relevant to the sorts of questions we’re asking about the nature of evolution. I know that you have to choose your terms carefully as a scientist when you do this research and that there are certain big-picture questions that are not even tractable to write journal articles about or do experiments about because they’re just too big. You need to specialize and zero in on tractable questions that can be answered in a lifetime. I have the luxury as a philosopher of not caring about whether the question is answerable or not—I live for the questions.

Richard Watson: An interesting question is good enough.

Matt Segall: Exactly. I love to be in dialogue with scientists doing this type of research and expanding the questions that can be asked within the disciplines that they work within.

Richard Watson: I remembered the thing that I was going to say earlier, and I think it's still relevant. If you think about this kind of system that I'm describing as having a dynamic that holds a history of its past experience and recreates it—it's sort of absorbing it and then pushing it back out again—there's obviously a temporal dynamic to that too. It's not just an informational pattern but a temporal dynamic of contraction and expansion of the pattern. It's only going to be meaningful to the extent that something is happening in the environment that's on a similar time scale so that it can resonate with it. There's a pattern of activity happening in the environment that induces an internal structure or organization within the system we're interested in, and that internal organization reproduces the temporal pattern of activity. That creates a system which has a structure caused by its interaction with the environment but which also acts back on the environment at some point later in time. And if the activity in the environment is cyclic, if it's periodic, then the system can become entrained in such a way that its response appears to anticipate what's happening in the environment instead of coming behind what happens in the environment. In other words, anticipation is just being the right amount of late. If you're responding to something happening in the environment, response suggests that you're the effect, not the cause, that it's something happening afterward, not before. But if the thing happening in the environment is cyclic and you have just the right amount of delay, then it will appear as though—do I really mean appear as though or it really is?—as though the actions are being caused by things that haven't happened yet instead of being caused by things that have happened in the past. 
I think that’s what we recognize as agency, that actions are being taken on the basis of the consequence of things that are going to happen—in other words, anticipating instead of non-agential artifacts which appear like they’re just being pushed around by their environment, they’re just affected by it rather than acting on future consequences. The future consequences become the things that cause the actions instead of the past causes. That relates to the notion of how you get from something purely physical in inverted commas to something that appears to be agential. The timeframe of how much into the future you can appear to be anticipating depends on the cognitive depth, the ability to hold that information from the past. Your ability to recognize temporal patterns over long timescales gives you an ability to anticipate future temporal patterns moving forward. That’s what makes things look less inert, less like rocks, and more like agents.

That ability of a system to engage, anticipate, remember, recall, act agentically is obviously relational, right? In one environment, it would just look like a rock. In another environment, it could look agential, because it would depend on the kind of pattern it can learn and the kind of timescale it operates on. That's really about resonance between the system and the environment. If you were to interrogate an agential system with rocks, it would not look agential. Conversely, I don't resonate deeply with individual atoms—I can't relate to the agency they have. I don't know how they feel about each other, though the kind of resonance they achieve with each other is quite profound, at least insofar as they participate in making biological organisms sometimes. In those circumstances, they do communicate with one another in a way that creates organization at multiple scales. That does say something. Does that say something about what it means to be experiential? Does it say something about how natural science is possible at all? I'm somewhat skeptical. I'm very skeptical that we see the universe as it is. We do make theories about the universe, and those do seem to work, but I think that's because the only things we can see about the universe are the things that resonate with us. Everything we see is by necessity the kind of thing that has something in common with everything else we see, and therefore fits together and is amenable to theory, because these are all things perceived by us. That's what they have in common, and that's why they're amenable to theory. All of the stuff that doesn't make sense, we just act like we can't even see it. That's just random, that's just stochastic, that's just a universal law that doesn't have anything amenable to science in it, and we just ignore it, just blank it. That doesn't speak to an experiential aspect, though. Does any of that resonate with you?

Matt Segall: Well, yes. I think a lot about, hypothetically, if we were to encounter an intelligent alien species, would their physics be radically different from ours? I suspect yes, but I would also suspect that there would be some way to translate between the two approaches, the two perspectives.

Richard Watson: Well, if we can see these aliens at all, then yes, sure.

Matt Segall: But we share a deep history in the sense that we're part of the same universe. I very much appreciate the account you're offering of this process of the internalization of environmental rhythms and how that can lead to a kind of anticipation. It connects quite directly to the question, "how is science possible?" Plato, in—I think it's in the Laws, one of his late dialogues, and also in Timaeus—talks about how the periodic movements of the stars and planets actually taught human beings math. By internalizing the regular periodic cycles, the seasonal rhythm, the lunar cycle, tracking planetary orbits—which were a little bit more difficult because of retrograde motion, which was a huge problem for Plato; he thought, that can't be right, there must be something else going on that would explain that apparently chaotic wandering motion. But this is a human-scale example of a principle that can be generalized, as you were describing earlier, of this learning and adaptive behavior coming from anticipation as a function of the internalization of environmental rhythms. It links up beautifully with the origin-of-life scenario I presented earlier, which is also very much about the rhythm of this wet-dry cycling. Whitehead says life is rhythm, and it's about the building up of cycles within cycles within cycles. It seems like you can get an account of what you're calling anticipation without there being any experiential aspect here. It's just that the anticipation is the internalized cycle: the internalized cycle is slightly off from the larger cycle it internalized, and it appears that cause and effect get reversed. But on the other hand, if we understand experience in terms of whenever there's a vibratory resonance, there is an aesthetic dimension to it—a kind of aesthetic contrast. Whitehead would say there's no such thing as nature at an instant. The idea of a point-instant is a complete abstraction. 
Nature is made of events, and an event or an occasion is a duration. There’s a tension emerging between the just prior moment and the next moment, and it’s in that tension between past and future that some kind of aesthetic contrast is born, is achieved, and there is an experiential horizon to it.

Richard Watson: I think it’s easier to get to valence or affect than it is to get to experience. If I’m a system with many levels of resonating rhythmic activity internally, capable of holding information from the deep past that affects my future behavior, and something happens in the environment that causes a deep feeling of recognition, something that resonates deeply at many frequencies and many levels of organization, and in particular something that takes the grit out of my wheels, it adds a note to the song that I’m singing that makes the notes I’m already singing more harmonious, not less harmonious—that seems like something I would like. Whereas if I receive something from the environment with a sharp corner, something that resonates with some parts but not with others or causes the gears to crunch, that’s going to be something I don’t like. I’m just speaking intuitively. I don’t really know how to connect that up in any deep philosophical way.

Matt Segall: Just when we talk about dissonance and consonance or dissonance and harmonization, we’re using aesthetic categories. When we see vibratory cycling processes in nature, we’re seeing from the outside something we can measure in terms of frequency or whatever. But what is it from the inside? What is it like to be that? Why is it that nature seems to—why is it that we seem to prefer harmonization over dissonance? Sometimes dissonance can be interesting, but only because it’s part of a larger harmony that could be reestablished. There’s an aesthetic dimension that you don’t have to add for the math to work or to precisely measure these processes of vibratory reiteration. But we know from our own experience, from the inside out, that certain vibratory patterns are pleasing and others are not. There’s an aversion/adversion type of polarity that arises. Maybe William Blake was right about energy being “eternal delight” and that part of what’s driving this evolutionary process at every scale is this desire, which maybe initially was an unconscious desire but gradually becoming more conscious—a desire for more interesting music. Maybe that sounds too cute and pretty, but I think when we step back and take in the whole cosmic drama and our human situation, I really feel like we need better stories that are compatible with the science but that also allow us to connect our understanding of nature to our human lives. There’s a certain form of science right now that drains life of meaning. I wouldn’t even say it’s a certain form of science—it’s the way that science can be popularized sometimes as this heroic story of, we don’t need any meaning anymore because we just pull ourselves up by our bootstraps and get on with it. Reductionist science takes every aspect of our observations and experiences where we might have looked for meaning and takes them apart and then declares, “Nothing to see here.”

So, you know, I’m not embarrassed to say I’m a bit of a romantic here. The reductive method murders to dissect. There’s a way in which I think contemporary sciences of complexity and systems theory have moved beyond that in the way that practicing scientists are engaging with the public. There are more people telling these stories and offering narratives, but there still seems to be this very strong desire among some scientists to attempt what Whitehead refers to as “heroic feats of explaining away.” It seems to me that we can start with the simple, reduce things down to the simple components, and see how we can then add back up to the more complex. But there are also explanatory strategies that work in the reverse direction. Namely, like I asked earlier, how is natural science even possible in the first place? Let’s start with ourselves, this seemingly very complex animal species. Look at our capacities, and then maybe we can understand ourselves as an exemplification of what the universe is doing rather than a random, highly improbable exception to what the universe is doing.

Richard Watson: Yes, you don’t want to do unnecessary anthropomorphism, but you also don’t want to do unnecessary exceptionalism, just stay outside of the system and we had nothing to do with it. I’m very—it’s of deep concern to me the way that reductionist science and evolutionary theory in particular removes meaning, such that we don’t have any agency in our lives because agency isn’t a thing anyway. Even if it was, it would be an adaptation of natural selection, which is to say it’s just another trait that our genes make us do in order to service their survival and reproductive imperatives. Curiously, genes are allowed to have agency in that story, but nothing else is, which is very weird. If we want—that’s our scientific origin story, where we came from evolutionarily. If our scientific origin story says none of this has any meaning, you don’t have any meaning, there is no such thing as your own agency, you are not part of this story, you’re just a product of this story, that robs living things not only of their agency but it robs us of any meaning in that account, in that narrative. Science needs to tell better stories. It needs to tell stories that allow us to have meaning. To do that, it needs an origin story that doesn’t eradicate—not just, oh, it turns out that there wasn’t any—but that it was set up in such a way that there couldn’t be any. The reductionist method couldn’t possibly find any meaning. I’m not embarrassed about identifying myself as a romantic either. 

Matt Segall: It seems to me that there was a moment, maybe right around the turn from the 18th to the 19th century, when electricity and magnetism were being discovered, the true age of the Earth was being discovered, and Romanticism was being born in Germany and England. There was a moment when a different kind of science—maybe a more organic kind of science—was possible. But then in the 19th century, mechanistic materialism won out because I think it had more immediate industrial application and so economic payoff. There’s a whole alternative approach—a natural philosophy, an epistemology and even a methodology that’s there—that had been worked out in great detail and with tremendous rigor by people like Goethe and Schelling and others. It’s a view of evolution as purposeful and of the human being as, again, a natural expression of what the universe has been doing from the beginning.

Two hundred years of mechanistic science have not just led us to some theoretical dead ends. This view of the universe has had rather destructive practical consequences, both for human meaning and for life on the planet more broadly. We've treated the living systems of the planet like machines, and the result is mass extinction, climate chaos, and all the rest of it. There's a real recognition of the need for a different way. It's not that I think we should go back to the 1800s, but I think the sciences of complexity and complex systems were actually there in the work of philosophers like Schelling, who's—I'm a big fan of Schelling—an early process philosopher.

We can recover a lot of the philosophical underpinnings of a new kind of science. It’s there, and it needs to be applied. We need to ask different questions and do different kinds of experiments to legitimize it, but nonetheless, it’s there. It gives us a different picture of the natural world as not devoid of all the qualitative things that poets want to sing about. As Whitehead would say—Whitehead redefines nature, and he’s inheriting this Romantic tradition. He has a whole chapter in Science and the Modern World called “The Romantic Reaction.” He criticizes some of the Romantics for being a little bit too quick to reject science; Whitehead wants to reform science. He says, “Look, nature is what we are aware of in perception,” which includes as much the red hue of the sunset as it does the electromagnetic radiation the physicist would want to associate with that color. When you reframe that and allow us to overcome Galileo’s separation between primary and secondary characteristics, a new kind of science becomes possible. It’s like putting the human being back into nature and not still having this Cartesian idea that scientists, as observers, are outside the world looking in at it. That’s a kind of residue of deism, really. If we can put the human being—put the scientists—back into nature and recognize that the colors you see, the scents you smell, the harmonies you hear, these are all real facts in nature that are happening. If our theories can’t account for them, then we need better theories—a new approach to theorization itself, perhaps.

Richard Watson: We can’t be surprised by the catastrophic meta-crisis that we’ve created in the world if we tell people that life doesn’t have any meaning beyond maximizing your survival and reproduction. If self-interest is the prime mover in all of life, why wouldn’t you treat other people and the natural world as resources to be exploited? 

Matt Segall: I don’t want to dismiss all evolutionary psychology, but don’t get me started on the sorts of just-so stories that seem to me to be just thinly veiled justifications for the current political economy.

What do you think, Justin?

Justin: Other than very much enjoying the conversation—you referenced Thompson's deep continuity earlier, and I find there's a nice continuity from my last conversation, which was just uploaded five days ago, with John Vervaeke. Vervaeke's name came up with relevance realization earlier, and I posed this question to him—so, synchrony there—about the way I framed the question. I wanted to ask him, as both a Neoplatonist and a cognitive science guy, about what I would call the singularity of abiogenesis, or—I like this—what you and Tim Jackson call sympoiesis, which is derived from other philosophers. That moment where, like you said, Matt, in those pools of warm water, the lipids and the RNAs and other things were converging and cohering. We had a good conversation about that. I won't belabor it because I know you're coming up on an hour and a half, Matt. But anyway, I really appreciate that this conversation arose and we were able to talk about that. John had some interesting insights. We were able to talk about that for a little bit. If we have enough time—what I thought was interesting from listening to both of you and knowing a little bit about your systems—in process-relational metaphysics a la Whitehead, there are the eternal objects. Richard, with your work, I'm thinking about scale invariance in cognition and also dimensionality and not quite supervenience, but the influence. In your lecture series, you have wires that illustrate shadows and projections out from various geometric shapes that can have that inductive influence. Maybe this is another conversation, and maybe I'm just doing a cheap segue as a host to say maybe we could continue if both of you are game. But maybe a couple of minutes on that, and then we can wrap it up.

Matt Segall: People come to Whitehead's process philosophy expecting everything to be flux and change, and they discover that one of his two major categories is eternal objects. A lot of people are kind of put off by that. What the hell do you mean there are these Platonic forms? I thought he was a process philosopher. His eternal objects are different from Plato's forms, though deeply informed by that original idea. It's important to note that people will refer to Platonism and think of Plato as this kind of two-world dualist where you've got the perfect eternal forms up in heaven and then everything we experience in the physical world is a pale imitation of that. That is a view that's given voice in some of Plato's dialogues, but it's important to remember that all of the best refutations of Plato's ideas are also in Plato's dialogues. Anyway, eternal objects are one of the two main categories in Whitehead's metaphysics. The other is actual occasions, or actual occasions of experience. Eternal objects are Whitehead's attempt to make sense of the function of possibility or potentiality. In contemporary science on this question, Terrence Deacon has this idea of "absential features." I don't know if you're familiar with that, Richard, from his book Incomplete Nature? You could think of this as related to the idea of constraints. When you think about everything that's already been actualized in the history of the universe, there is still some infinite set of alternative possibilities which could have occurred and which may yet occur in the future. For Whitehead, to understand process fully, you need more than just actuality; you need more than just what's already been actualized. You also need to make reference to possibility. This is both an attempt to account for our own experience, our capacity to anticipate, but it's also coming right out of quantum physics. 
Heisenberg tries to resurrect Aristotle’s idea of potentia in order to account for what the wave function is suggesting. So having some concept of eternal objects or potentia or possibility allows us to avoid, say, the many-worlds interpretation of quantum physics where every single path is actualized. If you have a category like eternal objects, you can say, well, no, most of those paths are not actualized. They’re held in reserve. They’re held in potential. The realm of potentiality is there to be drawn upon by actual occasions. Whitehead’s not suggesting that eternal objects reach in and do anything, but as a mathematician, he couldn’t explain even something as simple as “twoness” without making reference to the eternal object of two. That’s a deep debate among mathematicians and philosophers about the status of mathematical entities and relations. But also just color, as well. It doesn’t need to be a mathematical eternal object. The color of red—for us to be able to recognize the same shade of red today and tomorrow, there’s something about that quality that seems like it participates in time when the conditions are right, but it’s not simply produced by or emergent from the given relations in a particular moment. It’s called upon when it’s needed. Eternal objects are fundamental to Whitehead’s metaphysics. They’re required to make sense of our experience of alternatives, of what Stuart Kauffman calls the “adjacent possible.” That’s his version of Whitehead’s realm of eternal objects.

Richard Watson: I don’t know if I’m finding the connection that Justin was looking for or not, but something comes to mind for me. When you think about a process of learning, it is necessary to identify an inductive bias because otherwise, out of the space of all possible models that are compatible with the data, if you draw upon them uniformly, they don’t make any prediction. Only when you have a specific model that’s compatible with the data, that makes a prediction, but that’s capable of making a prediction that wasn’t specified by the data. That was the inductive bias. It’s the aspect of the model that comes from something other than the data. What’s the space of possibilities where you want to draw upon those models? Because if you draw from the wrong space, if you draw from the wrong inductive bias, then learning isn’t possible—learning with generalization, I mean. That feels like—well, you’re letting something in by the back door or something mystical about—well, how did you get the right inductive bias, then? How do you explain that? On the other hand, in practical terms, learning works. Machine learning works. It isn’t difficult to find inductive biases that work. Parsimony is a damn good inductive bias that works very broadly. It doesn’t seem—it’s doable, it’s not magic, it works. But there’s still something there which needs explaining. It’s like, why does it work? Why is it possible to find inductive biases that make learning with generalization possible? The way that I make sense of that is that the system doing the learning and the system that is being learned about already shared something in common. They were already built from the same kind of stuff. They were already from the same kind of universe. That commonality is what made the learning possible, but that commonality is a—you just push the question down the road a little bit. Why did they have something in common? 
As soon as you’ve decided that you’re going to divide the universe in half, partition the universe into—well, this is the system doing the learning, and this is the system that it’s learning about. As soon as you’ve drawn that line, you’ve made that mirror. You’ve introduced some invariants that are always there wherever you draw the line. However you want to make that partition, you’ve made a partition. In so doing, you’re back in the same space where there are things that follow from that necessarily, that aren’t just particulars that need you to pick out particular observations in order to find them to be true. It’s there by virtue of making that partition. That’s what makes induction possible. That kind of division, symmetry, asymmetry, is something you can’t get away from. I don’t know if that feels like it’s reaching into similar territory or not.

Justin: "Resonance facilitates recognition" is kind of what I'm deriving from that. Maybe that ties into the probabilistic nature of the eternal forms, that they are that which facilitates understanding. Again, if you guys are game, we could come back, pin this, or move into the metaphor of song. We didn't really touch that either. There's so much richness, so much terrain to be traversed. If you guys are game, maybe we could look a few weeks from now, whenever your schedules are open, and try that and continue. This was a beautiful and rich conversation.

Richard Watson: That would be great.

Matt Segall: I enjoyed this. I was glad I got to pick your brain a little bit, Richard, and got some answers to my questions. 

Richard Watson: I didn’t know what Whitehead had written about evolution, and I’ve only learned a little bit today, but it’s very interesting to me. I’d like to know more.

Justin: Well, maybe I can lead an outro with cautious romanticism. Richard, you said something to me in our conversation a couple of months ago, and it resonates with me so well, that love is making oneself vulnerable to know another. I really just expand that the other could be a book you read or somebody’s thoughts and ideas, not just any personal relations. Meaning is connection, in a sense, and making oneself vulnerable to know another to make that connection is good medicine. I want to acknowledge that that really resonated with me. Of course, Matt, your work—I’ve been following you for about five years, and I’ve had you on several times, and I really admire your project and what you’re doing. To the two of you, thank you guys so much. I hope this, to the YouTube audience, has impact and resonance for you all as well, because I know it did for me. I look forward to more if we can facilitate that and make it happen.

Matt Segall: Thanks for getting us together, Justin. 

Richard Watson: Thanks, Justin.

