Introduction to Evolving Systems: Beyond Emergence to Agency

The Emergence of Form, Meaning, and Aesthetics
Evolving Systems: Open Conversations

Paul Grobstein
8 October 2009


The issue:

Computer models show that quite complex systems can develop from quite simple interactions of quite simple things, in the absence of any architect, blueprint, or intention.  Biological evolution, as it is currently understood, seems to provide a real world example of such emergent phenomena.  But biological evolution has apparently also given rise to human brains and with those seems to have brought into existence architects and intentions.  How can we think about systems that are both emergent, lacking an architect, and also intentional?  How do the two interact?  What implications does such hybrid character have for thinking about a whole host of human phenomena, including the process of inquiry itself? 
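The opening claim — that complex order can develop from simple interactions of simple things, with no architect, blueprint, or intention — is easy to demonstrate concretely. Below is a minimal sketch (an editor's illustration, not from the original text): an elementary cellular automaton in which each cell obeys one local rule involving only itself and its two neighbors, yet a single seed cell unfolds into an intricate global pattern (rule 90 draws a Sierpinski triangle) that no individual cell "intended".

```python
# Minimal illustration of emergence: a one-dimensional cellular automaton.
# Each cell updates from only its two neighbors and itself -- "simple
# interactions of simple things" -- yet the grid as a whole develops
# intricate, non-random structure with no architect.

def step(cells, rule=90):
    """Apply an elementary cellular-automaton rule once (wrapping edges)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right
        out.append((rule >> neighborhood) & 1)
    return out

def run(width=63, steps=20, rule=90):
    """Start from a single 'on' cell and record each generation."""
    cells = [0] * width
    cells[width // 2] = 1          # the simplest possible seed
    history = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        history.append(cells)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))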

Background:

Excerpted from "From Complexity to Emergence and Beyond" (Grobstein, Soundings 90(1/2), 2007, available as a Word file)

Emergence is increasingly “in fashion” (Resnick, 1994; Holland, 1999; Johnson, 2001; Keller, 2003), as was complexity fifteen years ago (Waldrop, 1992) and systems theory (von Bertalanffy, 1968), cybernetics (Wiener, 1948), and unified science (Reisch, 2005) among other things before that (cf. Clayton, 2006).  In the case of emergence, as with its predecessors, different people are drawn for different reasons to a something that isn’t exactly the same for each person but instead reflects to varying degrees varying senses of dissatisfaction with existing paradigms for asking and answering questions.  In this important sense, emergence exists because of emergence – because of inquiry as an emergent process ...

The human brain consists of a very large number of relatively simple elements (neurons) interacting in relatively simple ways by exchange of information across synapses. Hence, the brain’s function would be expected to have an emergent character to it. What seems to give the brain its additional function as an architect is a bi-partite arrangement (see Figure 3; Grobstein, 2007b) in which a series of model building elements (circuits of neurons) that interact directly with the world also report their activities to a second set of neuronal circuits that in turn use them to develop goals and alternative behaviors for the system as a whole. The former corresponds, more or less, to what may be conveniently referred to as the “unconscious” and the latter to “consciousness”. Behavior reflects continuing interactions between the two, and is normally an expression of the blending of the emergent and the architect ....

what is significant about inquiry is not only the degree to which its stories account for the past in particular local situations but how well it recognizes additional patterns across wider realms in its own activities and how “generative” it is, how effectively it contributes to the ongoing explorations of what might be that characterize the larger emergent process of which inquiry itself is a part ...

the business of inquiry is as much about creation as it is about discovery, ... we ourselves are a result of and a continuing participant in a larger process of ongoing exploration/creation, and hence should not expect ever to achieve either a definitive description of properties and rules or a definitive predictability.  We should instead be at least content with, and perhaps even exhilarated by, being ourselves creative participants in the very phenomena that we inquire into.
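The excerpt's bi-partite arrangement — model-building circuits that interact with the world directly (the "unconscious") reporting to a second set of circuits that uses those reports to develop goals and alternatives ("consciousness"), with behavior a blend of the two — can be caricatured in code. This is a deliberately toy sketch: every class name, threshold, and behavior label below is invented for illustration and is not part of Grobstein's model.

```python
# Toy sketch (all names invented here) of the bi-partite arrangement the
# excerpt describes: an "unconscious" layer reacts to the world directly
# and reports its activity to a "conscious" layer that can propose
# alternatives; behavior blends the two.

class Unconscious:
    """Reactive model-building layer: responds to input directly."""
    def react(self, stimulus):
        reflex = "approach" if stimulus > 0 else "withdraw"
        return {"stimulus": stimulus, "reflex": reflex}

class Conscious:
    """Second layer: receives reports and proposes goals/alternatives."""
    def deliberate(self, report):
        # An 'architect' move: override the reflex when it conflicts
        # with an imagined goal (here, an arbitrary caution threshold).
        if report["reflex"] == "approach" and report["stimulus"] > 5:
            return "pause"            # an alternative the reflex never offers
        return report["reflex"]

def behave(stimulus):
    report = Unconscious().react(stimulus)     # emergent, reactive
    return Conscious().deliberate(report)      # intentional blending

print(behave(2))   # -> approach (reflex passes through unchanged)
print(behave(9))   # -> pause (the second layer overrides the reflex)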

Additional relevant materials:

Discussion materials/notes

Variations of persistent pattern and explanations

  • Some patterns don't depend on an architect/designer who had those patterns in mind before they came into existence, are "emergent"
  • Other patterns do depend on an architect/designer who had those patterns in mind before they came into existence, are products of "intention"
  • Both sorts of patterns are products of evolution

Evolution: from randomness through emergent pattern to intentional pattern

Implications for intentional inquiry

  • Patterns are not fixed but both perspective dependent and changing
  • There is no possibility of establishing general and invariant first principles and good reason to doubt they exist/are relevant
  • Intentional inquiry in general influences what is being inquired into, both directly and indirectly
  • Intentional inquiries are inevitably interdependent
  • Intentional inquiry should be as much about creation as it is about discovery/representation
  • Intentional inquiry should use a multiplicity of perspectives to generate new ways of making sense of things

 

One (non-linear, multiply-voiced) summary of the meeting:


 "Turning the Tables" (by Anne Dalke)

(from Shepherd's "Turning the Tables")

Paul began the discussion by picking up on a problem that emerged repeatedly in the five years of discussion held by members of the Emergence Group. The emergence paradigm "leaves something out": it gives no clear place for wrestling with problems of agency. "We need a way to use the emergence paradigm to take account of agency," to study and understand, in particular, the intersections of phenomena that depend on agency, with those that don't.

As we looked @ a series of six non-random patterns ("ripples"), in a variety of quite different realms ("earth, water, air..."), both living on this earth and extraterrestrial, Paul asked us to acknowledge that all but one of them were "explainable without an architect or a planner." These are "emergent patterns": each one came into existence without a pre-defined conception, and developed as the result of "simple interactions of simple things," without the intervention of a conscious designer.

But beings who look @ such images are inclined to see patterns.
There are no ripples in the absence of human observers.
What is the difference between production and perception?
(Tabled.)

We can account for much patterning without evoking a pre-existing template that brought it into being. This is a generalization of Darwin's insights, and the core idea of both evolution and emergence: there is no architect, no planner, no designer of the world in which we study and live. This emergence perspective can also be usefully invoked to describe a wide range of cultural processes, including the "invisible hand" of economics, the "contingent development" of history, and ongoing changes in art and literature.

BUT IT IS NOT ENOUGH.

We cannot always tell whether a particular pattern was intentional or not. It is a mistake to deny the possibility that sometimes intention DOES PRODUCE ripples. Where do these architects and planners come from, and how do they intersect with the forces of emergence? Our shared project here will focus on the intersection between purely emergent phenomena and deliberate creation, and include adaptations that emerge in a creative sphere.

In evoking the variable of adaptiveness, it was suggested that a third category might be useful in discussing emergence, one describing those patterns which arise in adaptive response to their environment: not guided by an invisible hand, but guided nonetheless. ("There is no template for the stripes on a zebra." But lions might still be described as "agents" in this scenario. Although they don't have zebra stripes "in mind" while hunting, their acts of predation exert direct pressure, which contributes to the development of zebra stripes.)

Are we seeking an explanation of how or what? If the former, we need to consider the terms of causation. If the latter, is there really a fundamental philosophical distinction between pressure that is exerted intentionally, and that which is not?
(Tabled.)

Evolving systems is a "broader category" than emergence, one that encompasses the created as well as the emergent products of evolution. It stretches both further back in time than biological evolution, and further forward into the future.

But is this "evolved" distinction necessary? Both nature and art are products of evolution: one just arose later in the process. Why can't we say that an artistic process is "just" emergent? (Is this question "opposite" to the one about lions being "agents"?)
(Tabled.)

It is precisely the emergence of this "latter" process that interests us here. Against a background of randomness, new non-random assemblies (stars, solar systems...) developed. The process still continues, from the development of inanimate objects, through that of "model builders" to that of "storytellers," with the "prototypical point of evolving systems" being the "substantial production of non-random, improbable patterns in the absence of a designer."

Is probability a technical term?
Organization is probable.
Particular organizations are improbable.
In what various senses is complex organization probable?
(Tabled.)

The early changes that took place in "proto-evolution" ("before biology") were both persistent and path-dependent. Over time, an important new characteristic and particular implementation of this process emerged, in the evolution of living beings, which we now call "differential reproductive success," or "fitness." Five billion years ago, increasingly complex assemblies discovered a "special trick for persistence": the ability to represent their surroundings within themselves. Paul called such an "improbable" entity a "model builder," and argued that its capacity to use such "representations" increased the likelihood of its persistence in the world. The representations of the model-builder, by allowing it (for example) to anticipate and avoid obstacles, store that knowledge within itself, and change its position in the world, enabled it to "stabilize its own existence." (Examples of characteristics within an entity that allow it to anticipate its surroundings include sensory systems that identify food, and so alter the direction an organism takes to find it; or circadian rhythms that tell a tree to anticipate periods of increased sunshine.)  Organisms acquire information, in other words, that allows them to incorporate feedback mechanisms that affect their activities.

There's always a way for the sophist to get out of this, but:
how might we best characterize this transition?
The language of "representation" is particularly confusing, because it implies consciousness.
Might we replace the term "biological evolution" with that of "adaptive evolution"?
Is the transition point really about the storing of information? About self replication? About self assembly?
(Tabled.)
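The "special trick for persistence" attributed to model builders above — storing a representation of the surroundings inside oneself and using it to anticipate and avoid obstacles, rather than merely reacting on contact — is essentially what a path planner does. Here is a minimal sketch (invented for illustration, assuming a small grid world) in which the agent consults its internal map of obstacles via breadth-first search:

```python
# Illustrative sketch (invented here) of a "model builder": an entity that
# stores a representation of its surroundings and uses it to anticipate
# obstacles, rather than merely reacting when it bumps into them.
from collections import deque

def plan_path(start, goal, obstacles, width=5, height=5):
    """Breadth-first search over the agent's internal grid model.
    The agent never 'collides': it consults its stored representation."""
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        (x, y), path = frontier.popleft()
        if (x, y) == goal:
            return path
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (x + dx, y + dy)
            if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                    and nxt not in obstacles and nxt not in seen):
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None  # no route exists in the modeled world

# A wall with a gap: the model builder anticipates it and routes through.
wall = {(2, 0), (2, 1), (2, 3), (2, 4)}   # gap at (2, 2)
route = plan_path((0, 2), (4, 2), wall)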

Active inanimate objects remained in the world, of course, as the model builders came into existence, and were influenced by them in turn. Though at first masked by scale, model builders became increasingly powerful causal agents. There was a "bidirectional causal relation among things": beavers affected the inanimate, and vice versa.

But how far back did this reverse arrow of causation go? How deep the reciprocal interactions?
(Tabled.)

Watch out for confusing semantics. We are not looking for foundational ideas here, but just identifying themes, giving "a sense of problems to explore." Inanimate objects gave rise to model builders, which in turn gave rise to the next "theme," that of storytellers: sensate, conscious and able to construct "counterfactuals," that is, to use models to conceive of alternate possible futures. Storytellers in turn became a distinctive force in the world, because they are capable of imagining circumstances that do not (yet) exist. They can play with contingencies.

This concept of emergence-as-incorporating-agency leads to a series of interesting practical problems. For example, the economic crash might well be better described in terms of emergent interactions, rather than intentional agency; or better yet, in terms of the interplay between emergence, intention, and efforts to conceive of counterfactuals. Existing prior to life, evolving systems now continue with a peculiar human point: the need to understand how change is produced both with and without architects, as well as in the interactions among those systems.

The practical point at the core of all interesting intellectual problems is this: how much of what we see is caused by intention, as opposed to "actually being out there"? Storytellers, the makers of counterfactuals, have "lost touch"; we can never know if our perceptions are "in here" or "out there." This is the intellectual problem of the human as storyteller. Do the patterns we see become patterns only in the presence of us as pattern-seekers?  "All that's out there" is "mild statistical regularities in noise." We "put names on it," and call them stories.

We can push this question further: as the undertaking of understanding emergence becomes more and more capacious, what is left out? Can any temporal systems prior to life be said to not evolve? "Is anything non-evolving?" Is emergence generalizable to everything? Is there no fundamental difference between rabbits and rocks?
(Tabled.)

Might emergence be described, alternatively, as "wholes being more than the sum of their parts"?  The emergent process is a contingent one, not deterministic, and so accurately described as "not reducible to its parts." But such a definition does not highlight the focus of today's conversation: the role of agency and intentionality in evolving systems. It is actually difficult for intentional agents to tell stories about emergent processes that are not intentional. Intentionality has a strong bias towards seeing intentionality as cause. There's a curious recursiveness or feedback loop operating here. We explain what happened, post hoc, and in making up the story, make it deterministic.

If we think about evolution as "a mechanism for transferal of information between states," we may have been "confusing the mechanisms" here. Some are random, some selected, some transferable. Alternatively, this conversation may have added a "fifth mechanism" to the traditional four means of biological modification: natural selection, mutation, drift and gene flow. Because of contingency, our intentions can have unintentional outcomes. Whether intended or not, this conversation may have had the effect of making us be less arrogant about our intended outcomes: so much of what emerges is both unintended and unavoidable.

Unavoidable?
(Tabled.)

 

 


Supported by the Metanexus Global Network Initiative, by Bryn Mawr College Center for Science in Society, and by the Serendip web site. 

Comments

Deborah Hazen


"Maybe the kind of agents that we are, with the brains that we have, makes us incapable of ever really matching deliberate action with intended outcomes, that this is a structural blindspot in human cognition."

I keep rereading this sentence. If we apply Paul's idea and consider that this "structural blindspot" may be a feature or asset, not a bug or failing, and we assume that it is either an adaptive or non-adaptive feature where do we end up?

Take the economic crash---there were people (few to be sure) who were sending up warning flares in every direction that they could but were ignored. Greenspan and others held an ideology that they were unable to let go of---that individuals would exercise agency in their own best interests and that those interests served the best interests and safety of the group. He later said that he was wrong.  If I assume that no one intended to bring down the economic system, then I am left with a loop of stories that influenced future agency, that influenced the next story, and the next act of agency...and paused only when a counter-factual event allowed a new storyteller to take over. No straight line between deliberate action and intended outcomes, rather a meta-story that caused a faulty post hoc story to be told about the effects that agents would have on the outcome. So some people told the lower-level story "if we create these innovative financial devices, we can make a lot of money" and the overarching story focused on the power of the free market, denying the ill effect of the lower level story.

Perhaps we have the kind of brains we have because the most efficient way we experience paradigm shifts to meta-stories is through the crashing of a looping story/agency model with a strong counter-factual? So the norm is the status quo, we make the shift only when faced with incontrovertible evidence (for some in some cases no evidence is incontrovertible). From a societal viewpoint it would be non-adaptive to embrace the counter-factual? So straying outside of the status quo is a mutation--some will be successful (maybe become the new status quo) but most will fail? Or have unforeseeable outcomes?

 

Paul Grobstein

emergence and stories, individual and group

Nice example of the interplay between story and emergence, as well as of another direction that the intentionality/story teller story opens up: the need to distinguish between individual and group stories.  The group story that most people acted on, whether consciously or not, was indeed that "individuals would exercise agency in their best interests and that those interests served the best interests and safety of the group."  To put it differently, the story was that individuals should use their own stories and leave collective well-being to emergence.  And yes, the group story ("metastory"?) proved wrong.  And yes, there were individuals with individual stories who anticipated that.  The interesting question is whether we can come up with a new group story that is "less wrong" and that will be accepted as part of enough individual stories to make it effective, i.e., can/will individuals become comfortable with stories of themselves as well as group stories that make them more responsible individually for collective well-being?

More generally, how do we think about the relation and interplay between individual and group stories? I do think this indeed has to do with "looping," with the ongoing process of having stories, both individual and collective, and using them not as final words but rather as a basis for exploration and further evolution.  Since there is some tendency to inertia in collective stories (as well as in individual ones) it is often the case that we don't change them without something "crashing."  I wonder if we could get better at using our capacity to conceive counterfactuals to at least sometimes avoid potential crashes?  At the same time, if what we're talking about is a genuine evolutionary process with a significant degree of indeterminacy, then it's not that we, individually and collectively, have particular "blind spots," but rather that there will always be things we don't anticipate/can't account for in terms of our existing stories.  In this sense, we will always have "blind spots" and can perhaps learn to appreciate discovering them.

Paul Grobstein

moving beyond an evolving systems intro

Thanks all for rich conversation Thursday morning, and continuing conversation here.  Among other things, it's wonderful to have Anne's list of "tabled" items for future conversation.  I very much agree that the details of the transition from the active inanimate to model builders and from there to story tellers are well worth exploring more fully.  So too is the relation between agency/intentionality and adaptiveness (they aren't the same thing at all: agency/intentionality are sometimes non-adaptive and things can be adaptive without being intentional), the relation between agency/intentionality and art (it can be either emergent or intentional and is usually both), the scope of evolving systems (yes, it seems everything is evolving, but not everything is emergent), and the place of emergence given intentionality (yep, there is always an "unavoidable" component to what happens; cf Unintended consequences, unconceived alternatives, and ... life).

I'd add to the list the desirability of some further discussion of both "improbability" and "indeterminacy" (for the latter, see Evolution/science: inverting the relationship between randomness and meaning) and of reasons why the "whole is greater than the sum of the parts" (cf Complexity and Emergence and Some thoughts on similarities between brain function and morphogenesis, and on their significance for research methodology and biological theory).   Yes, there is a risk with both emergence and "indeterminacy"  of letting gods in the backdoor, of settling for quasi-mystic answers instead of exploring further for underlying phenomena.  But there are, on the other hand, ways of handling both emergence and indeterminacy skeptically and rigorously, and there is a risk of missing out on potentially useful lines of exploration if we rule either out of court at the outset.  

I'm glad too that there is some sense that the move from emergence to evolving systems, systems that give rise to and now include intentionality/agency, might provide a useful framework for working on these and related problems.  Doug's bold-faced synopsis is very much on the mark and, yes, the point is the "behavior of a system" rather than "the sensation of feeling conscious."  I'd like though to replace Doug's "primary" with "previously existing" and his "forces" with "causal influences" to avoid some metaphysical baggage.  To wit:

"Agency" is a new thing in the universe, created from the same processes that have always existed, but it is more recent than previously existing causal influences, and it is now applying causal influences on the universe that are new in some fundamental way.

I'm glad of course to have Doug's skepticism as well.  "Will 'agency' and all of the language that goes with it (intentionality, story-telling, etc.) help with the understanding of these systems?"  Yep, I think it will as long as by "systems" we mean the entire universe including human experiences of it, and we are willing/able to extend our repertoire of both observations and exploratory tools as needed for an enterprise of that scope.  We'll see.

Tim's skeptical concerns are equally welcome, particularly so since they come, I think, from a quite different direction than Doug's.  Yes, we experience consciousness and, with it, a sense that we are indeed intentional agents, but could the latter be an illusion?  Could our sense of ourselves as agents be a "blindspot"?  Or, maybe worse, could we be, like beavers (?), engaged without recognizing it in a mindless and ultimately impossible task of trying to remake the universe in our own image?  

I think it is obviously the case that "outcomes produced by intentional actions" are often "not what the conscious intent of story tellers was aiming for."  And fully agree that the intentions of authors (or painters or politicians) are much less than an adequate description of both why they did what they did and how it plays out in the world.  I am though much less inclined to say that "intentional actions" are therefore irrelevant.  I may not be fully aware of why I act the way I do, but that's not the same thing as saying that my conscious intentions play no role whatsoever in how I act or in the results of my actions.  Nor is it to say that I, inevitably, have exercised no choice in my behavior, that how I acted is fully and deterministically emergent.

The key here is a set of ideas that Tim and I played with together several years ago, the notion that what comes with consciousness is the ability to create "counter-factuals," ways of making sense of the world and one's place in it over and above the particular one that one has at any given time.  By virtue of being able to somewhat non-deterministically conceive counter-factuals, story tellers have some ability to move the world around them in directions it might otherwise not have moved, and to move themselves in new directions as well. 

At least, so the story goes.  And what it in turn implies is not that we have no blind spots nor that we don't sometimes act as beavers, and certainly not that we can eliminate emergence, the unexpected or unanticipated, either in ourselves or in the universe of which we are a part.  But the story also implies that we have the potential to be an additional "intentional" influence both on the world and on ourselves and that we can cultivate that potential, in ourselves and others, by enhancing our capacity to generate counter-factuals. 

Is the story provably true?  No, of course not, no more than any other story is.  But it fits the observations and gives us some new territory to explore, the interface of the emergent and the intentional.  And that's all a good story can ever do.  As William James put it, "my first act of free will shall be to believe in free will" ... to see what happens.  Perhaps we can try the same trick with the intentional?  And, in so doing, get beyond both our existing blindspots and our current senses of ways we want to remake the universe?

 

 

 

Tim Burke

Mischief

Just to further distract: why be so sure that it's a futile and impossible thing for the beavers to make a dam, e.g., for humans to reshape their local environments so that they are amenable to more linear understandings of intentionality?

I'm quite clear that this is a bad thing for the same reason that I'm clear that monocultures are bad (bad pragmatically because they are risky cul-de-sacs and bad philosophically & aesthetically because they prevent the emergence of novelty, innovation, surprise, and so on). But is it impossible?

Or to push it even further, impossible for what reason? It's one thing to say it's impossible because however much an organism reshapes local environments to its liking, there are other levels of environmental complexity both "above" and "below" that local which will eventually unsettle or destroy that management. It's another thing to say that from within a given attempt to remake the world so that it's compliant with linear, non-emergent views of agency will spontaneously arise new emergent forms because that's the way all systems inevitably are (e.g., that there are no linear or non-emergent interactions over time).

Paul Grobstein

jackdaws, emergence, intentionality, and their inter-relations

Jackdaws, to say nothing of beavers, often display much more intentionality than they, or others, are inclined to acknowledge.  And that in turn suggests we should add to our list of tabled items a future conversation on the relation between intentionality per se and the more elaborated intentionality associated with a conscious awareness of counter-factuals.  In the meanwhile, Tim has raised a seriously interesting issue, antithetical to the one raised earlier by Doug and by Tim himself.
The earlier concern was that intentionality/counter-factuals might disappear with a closer examination of emergence.  The current concern is the obverse, that with the emergence of intentionality/counter-factuals, emergence itself might become irrelevant.  As Tim puts it, perhaps "a more linear understanding of intentionality" would obviate the need for a recognition of the more complex and non-deterministic character of emergence?
Human history certainly provides ample reasons for such a concern.  There are enough examples of commitment to "isms" of various sorts to make it clear that we have an inclination to "linear understandings of rationality."  On the flip side, though, there is, in human history, substantial evidence of a growing (?) resistance to such inclinations (cf Taoism, Rorty, Science and unconceived alternatives, Quantum physics and the brain).  Perhaps humans are reaching a point where we are prepared to recognize the limitations of "linear understandings of reality" in general (cf Writing Descartes and Rorty, non-foundationalism, and story telling) and hence to acknowledge an essential role for non-deterministic emergent processes not only within our own brains (cf Variability in brain function and behavior) but, at least as importantly, outside of ourselves as well?
Maybe humans will reach this point, maybe not.  Either way, what fuels my own optimism that non-deterministic emergence will not disappear as a significant force in the universe is that its impact has nothing whatsoever to do with whether humans appreciate it or not.  Stars will explode as supernovae, avalanches will occur at unpredictable times, and markets will crash whether humans appreciate the impact and significance of non-deterministic emergence or not.  To put it differently, humans will come to appreciate that their own actions, as well as non-deterministic emergent factors, play a role in the universe, or they will go extinct and hence establish for the future the limitations of intentional perspectives.
I'm glad we have jackdaws in the world, whatever stories they tell of their own motivations. 

Doug Blank

The language of agency

That discussion was useful. After all of these years of discussing emergent solutions, I think I am beginning to understand Paul's perspective, and understand why this group was formed. To summarize the way I understand it: "Agency" is a new thing in the universe, created from the same processes that have always existed, but it is newer than the primary forces, and it is now applying forces on the universe that are new in some fundamental way. This hypothesis does seem to warrant exploration, and I'll participate. But I am skeptical.

The hypothesis is related to some other recent philosophical theories, such as The Hard Problem (en.wikipedia.org/wiki/Hard_problem_of_consciousness). But this is different, focusing on the behavior of a system rather than on the sensation of feeling conscious. Paul's point is a move forward.

But I am left wondering: will "agency" and all of the language that goes with it (intentionality, story-telling, etc.) help with the understanding of these systems? Or, will it actually obscure what is really going on and hinder our ability to make progress in understanding these systems? I feel the latter, but I don't have a specific argument (yet). And this gives me a goal to work towards.

Tim Burke

Intention as Beaver Dam or Blindspot

So, thinking about this sketch of things and wanting to move on a bit beyond towards some of the tweaking and poking and re-labelling that it often provokes when Paul lays it out (though this is also still a very useful activity).

Here's the interesting problem about the main class of story-tellers that we know about, namely, ourselves. Near the end, I was thinking that it may be that our minds, a product of evolution, have difficulty comprehending emergent and/or evolving processes in conscious or self-reflective terms, which is where our sense of experiencing agency or intention resides.

These are issues Paul has raised before with this group: human minds may govern human action in ways that are not conscious, and consciousness may be mostly a posthoc or emergent sensation.

But if we have agency or intention, we tend to think that's a product of consciousness or sentience. Our nonconscious minds may be perfectly able to operate effectively within a world full of emergent or complex systems; our conscious minds contemplating intentional action may not be very good at doing so. When we contemplate acting intentionally, we tend to construct the relationship between our planned actions and intended outcomes in a fairly linear fashion.

Social science, environmental science, etc. in particular mostly rely upon this: isolating a single variable or reducing a problem initially for the sake of comprehending a whole, then often forgetting that such a reduction or modelling has been performed while advocating an action intended to produce a new systemic outcome based on an alteration of the single variable. What then usually happens instead is something unexpected or unpredicted.

So here's a real problem that follows on this. Using Paul's terms, the intentionality of story-tellers unmistakably produces contingent outcomes that affect both model-builders and the active inanimate. The feedback arrows he builds into his model seem absolutely right to me, and they are meaningfully contingent rather than deterministic (however much they may be the outcomes of a process which is deterministic at larger scales).

However, we could argue that those outcomes produced by intentional action are never what the conscious intent of story-tellers was aiming for, because our minds don't really understand how complex causality works. A purely literary example: this is kind of what some varieties of postmodern theory are arguing about the relationship between the author and audience of a text--that the author may intend, in creating a text, to have a particular effect on an audience, but that the text's being produced, circulated, and read in the world across time and space produces many outcomes that the author did not and could not anticipate or intend.

We could go on and on with examples of declared intentions of agents that end up mismatched to apparent systematic consequences, before we even open the Pandora's box of whether intentions are just post-hoc stories we tell about non-conscious actions. Maybe the kind of agents that we are, with the brains that we have, makes us incapable of ever really matching deliberate action with intended outcomes--that this is a structural blindspot in human cognition. (Makes you wonder about the arrow leading onward from story-tellers in Paul's diagram: is there a kind of information-storing or manipulating entity which someday plausibly could have an agency compatible with, and cognizant of, emergent/evolving systems?)

-------

Or. Here's the other possibility. Is the intentionality of story-tellers more like a beaver's teeth? Meaning, do story-tellers progressively alter their environments so that those environments conform more and more to how story-tellers relate action and consequence? A beaver moves objects around in the environment in such a way that the environment changes to favor the beaver's reproduction, using an anatomy which is adapted to making those changes. Maybe humans are engaged in a cumulatively more successful effort to remake the evolving systems which produced the active inanimate and then model-builders, so that those systems are really or ontologically more and more responsive to linear understandings of agency. This is pretty much what high modernist science, policy-making, and art imagined it was accomplishing up through the 1960s, though the language used to describe this goal was more about mastery, control, etc. over the universe. If you take this view, unexpected or unintended outcomes following on intended action are a sign of incomplete work rather than of a mismatch between story-teller intentionality and the "real" world of emergence and complexity. They're the leaks in a beaver's dam, yet to be plugged.

In this view, the universe is being steadily cut to fit the bed of the evolved mental preferences of story-tellers, to be an entity about which stories can be more readily and effectively told (though so far on an intensely local scale).

There are all sorts of possible meeting grounds between these two paradigms, of course. You could look at the second idea as a story that story-tellers like to tell to themselves, but a story which nevertheless has pretty powerful real influences on their actions in the world. Or you could argue that there are some kinds of actions where intentions and consequences can be forced to align through simplification or optimization of a system or structure and others where complexity is inevitable.

 

Paul Grobstein's picture

the need for an emergent/intentional evolving system perspective?

Thanks all for further stirring the pot this morning.  A few quick thoughts for further mulling by me, and anyone else interested ...

All this is of both practical and conceptual significance.  Among the current practical issues is the interplay between the emergent and the intentional in economics (cf. Paul Krugman's "How Did Economists Get It So Wrong?"), as well as in health care and in politics/social planning in general.  And among the important conceptual issues is how to deal with the problem that our own understandings always have both an emergent and an intentional component to them.

For both, the phenomena of "unintended consequences" and "unconceived alternatives" are of central concern.  These are generally regarded as "bugs" or "failings" from an intentional posture.  From a blended emergent/intentional posture, they might instead be regarded as "features" or "assets," in that they provide the wherewithal to continue the ongoing process of evolving.
