Agents and environments - reflections ...

Projects: 
So, we've had a taste of cellular automata, and now of agent-based models, and are about to go on to other things. Maybe it's a good time to look back at our initial thoughts about emergence and what it is/might be good for, and reflect a bit on where we've gotten so far?

My own thoughts (hoping others will add theirs):

  • CAs make interesting patterns, and are fine if one thinks the universe is deterministic and has a lot of time to do the computing needed to predict its future course
  • agent-based models can make interesting patterns too, but also do things that seem more interesting/immediately interpretable to humans
  • Langton's ant, among other things, suggests that pretty simple universes with agents need to be thought about in terms of systems with BOTH patterns and absences of patterns (the hardest case of Voyage to Serendip)
  • a challenge: is it so that there is a sequence of progressively sophisticated "emergent" models with progressively greater capabilities? how would one measure "capabilities"? can one imagine some undirected process that would take one from a more or less CA universe to a universe with agents and on to universes with progressively greater representations of purpose? where would "indeterminacy" or "computational irreducibility" fit in such a sequence?
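Langton's ant comes up repeatedly below, so a minimal sketch may help ground the discussion. This is my own hedged Python version of the standard rules (white cell: turn right; black cell: turn left; flip the cell, step forward) — the grid representation and function name are just illustrative choices, not anyone's canonical code:

```python
# Langton's ant: a single agent on an unbounded grid of binary cells.
# Standard rules: on a white cell turn right, on a black cell turn left;
# flip the cell's color, then move forward one step.

def run_ant(steps):
    black = set()        # coordinates of black cells (everything starts white)
    x = y = 0            # ant position
    dx, dy = 0, 1        # ant heading (initially "north", y increasing upward)
    for _ in range(steps):
        if (x, y) in black:          # black cell: turn left, repaint white
            dx, dy = -dy, dx
            black.remove((x, y))
        else:                        # white cell: turn right, repaint black
            dx, dy = dy, -dx
            black.add((x, y))
        x, y = x + dx, y + dy
    return black

# Run past the chaotic phase: after roughly 10,000 steps the ant is known
# to settle into the repeating diagonal "highway" (period 104).
cells = run_ant(11000)
print(len(cells))
```

The point relevant to the second bullet: the same tiny deterministic rule produces an initial symmetric phase, a long apparently random phase, and then a highly patterned one — pattern and absence of pattern in one system.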

Comments

BenKoski's picture

I buy the first three points. The fourth bullet is a very interesting thinking point, which I'm not sure I'm prepared to answer quite yet. I do have a few immediate reactions, though (which hopefully make sense to someone):
  • Isn't Langton's Ant computationally irreducible? An algorithm to project out the behavior of Langton's Ant doesn't immediately occur to me, but that doesn't mean it can't be done...
  • If there are indeed computationally irreducible agent-based models, what does this mean? Is there a difference (particularly in fundamental function or utilities, or I suppose, "capabilities") between computationally irreducible CAs and computationally irreducible agent-based models? I would argue that there really is no fundamental difference in their "capabilities," since they are both producing the same complex, unpredictable behavior.
  • If this fundamental similarity in "capabilities" does indeed hold, then I can see how it might be possible to argue a transition from a CA-based universe to an agent-based one. In a sense, "agent-based" and "CA" might both be arbitrary categories that we are imposing on some sort of larger, more abstract emergent behavior.
AngadSingh's picture

This 'transition point' of sorts grabbed my attention a couple weeks ago when Paul was describing his conjecture (active inanimate > model builders > story tellers). Somewhere in that historical progression, must one postulate a transition to an agent-based reality? This, of course, has implications for Wolfram's theory. What would this transition entail? There have been experiments that model the origins of such agents (components of living things); Miller & Urey in the 1950s is the most famous example: they modeled early-earth conditions and witnessed the formation of amino acids. Their experiment models a transition that I find analogous to what Paul conjectures, that is, the introduction of agents.

In Miller & Urey's experiment, however, there is no evidence that the simple constituents operate under any different rules following their reorganization into proteins. But in agent-based modeling, the agents operate under a different rule set than do the patches. In this sense, there is some discordance. Last week I questioned whether there were any limitations to using agent-based modeling; I think what I've pointed to above may be one. In the 1D CA, all spaces operate under the same rule set. This is, more or less, empirically true for Miller & Urey's experiment. In agent-based modeling, however, the patches and turtles can operate under different rule sets. This may be a divergence from what is witnessed in reality (keeping in mind, I suppose, David's point on scope: if we zoom out, then agents (living things) do appear to operate under different rule sets than the spaces they occupy).
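The uniformity in the 1D CA case is easy to make concrete in code: every cell updates by one and the same rule, with no separately programmed turtles or patches. A minimal sketch, assuming Wolfram's elementary-CA numbering (bit k of the rule number gives the next state for neighborhood pattern k) and using Rule 110, which is known to be computationally universal:

```python
# Elementary 1D CA: every cell follows the same rule; there are no
# separately programmed "agents". Rule number uses Wolfram's convention:
# bit k of the rule gives the next state for neighborhood pattern k
# (left*4 + center*2 + right), with periodic boundaries.

def step(cells, rule):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Rule 110, starting from a single live cell.
row = [0] * 31
row[15] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row, 110)
```

Contrast this with a NetLogo-style model, where the `step` applied to a turtle is a different procedure from the one applied to a patch — exactly the "discordance" described above.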
Doug Blank's picture

I think that you are describing a subtle, yet important, limitation of agent-based modeling. By adding agents that follow different rules, we are cheating in the game of emergence. What we would like is agents that arise without being given different rules. What is the difference between a glider in the Game of Life and Langton's ant?
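One way to sharpen that closing question in code: a glider is not an agent with its own rule set — it is just a pattern that the Game of Life's single uniform rule happens to preserve and translate. A minimal sketch (set-of-live-cells representation is my own choice, not anyone's reference implementation):

```python
from collections import Counter

# Conway's Game of Life: every cell obeys the same birth/survival rule
# (birth on exactly 3 live neighbors, survival on 2 or 3). The "glider"
# gets no special code -- it is a pattern the uniform rule preserves.

def life_step(live):
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)
# After 4 steps the same shape reappears, shifted one cell diagonally.
print(sorted(state))
```

Langton's ant, by contrast, is written as a distinguished entity with its own rule, separate from the rule (there is none) governing the cells — which is exactly the "cheating" at issue. Whether an ant-like walker could instead *emerge* as a glider does, under one uniform rule, is the open question.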
Laura Cyckowski's picture

Thinking back to earlier, Professor Grobstein's second point is a new/better understanding of emergence for me: systems with absences of patterns/randomness. As for the fourth point, I found agent-based modeling to be more interesting than CAs for the very reason that it seems to bypass trying to identify an "undirected process that would take one from a more or less CA universe to a universe with agents," and seems more immediately useful for showing what is/could be than trying to start from the beginning/bottom. I'm unclear how "capabilities" (function/utilities/sophistication) is being used; it seems like it could be reduced to the same questions we asked about purpose. I guess one possibility could be to consider the number of "levels" in a system and how much their actions modify the lower levels in turn (I'm picturing the dotted line from 'the group product' leading back to the lowest level).

Also, I was reading the article by Chaitin and got to thinking about a theory of everything and what it would actually look like. Once you have a nice small set of simple axioms & theorems that account for everything you want them to account for, what's stopping you from asking why those ones? At the end of the article Chaitin addressed the issue of physics versus mathematics and, I think, why mathematics should allow more empirical methods into the field. So part of my question becomes: if you do reach some set of axioms/theorems, it would seem valid to ask why those ones in particular, and who's to say a different path couldn't have led to equally satisfying but different sets of axioms, and so on.
LauraKasakoff's picture

Laura, I think you bring up a very interesting point about the "choice of axioms". This is something that bothers me philosophically and mathematically. When we think about theories of everything, we are faced with restrictions because we are finite beings. That is, we are required to start somewhere, and so we succumb to "deciding" upon an axiom or group of axioms. Although the term "deciding" may be misleading, because how do you decide on a foundation? The term "decide" seems to imply that there was a reason, a thought process, behind the choice of axiom, but how could there be? If there were a reason behind an axiom, it would no longer be an axiom. There would be something supporting it! Ew. Isn't that just painful? I like math because I feel safe in a discipline where everything is derived from something else, but when I am reminded that we are just blindly following axioms some famous dead mathematicians started with long ago, I shudder.

The thought that comforts me when I start down that unsettling path is: maybe the choice of axioms does not matter. I believe that the world is one objective way no matter how many different subjective ways people may perceive it. Then, even though the human choice of a specific set of axioms may be subjective, perhaps that is irrelevant if we use the same set of axioms to describe everything. I think it is okay to have any size set of axioms in a theory of everything so long as the set doesn't change from one proof to the next. Of course this is just my own personal philosophical safety blanket.
PeterOMalley's picture

You've hit on something we were just talking about in philosophy class: Pyrrhonian Skepticism. Its postulate is that no belief we have is justified, and here's how it goes: in order for a belief to be justified, it must have a reason that we believe to be true. In order for belief in that reason to be justified, it must also have a reason that we believe... and so on. If there is an infinite regress of reasons and beliefs, then there is no fundamental reason that any of them should be true, and the contrary of the original belief is just as justified as the belief itself. Alternately, if there is a fundamental reason that validates the original belief, then that reason must, by definition, not itself have a reason that we believe to be true. As such, the contrary of the belief is still just as valid as the belief itself. Finally, the only other possible "belief structure" is a circular one, where the original belief ends up justifying itself. Besides being silly, a circular structure is also no help: it is just as easy to construct an opposing circular structure where the contrary of the belief justifies itself. In conclusion, there is no rational reason to believe anything at all. I realize my quick summary there is probably unintelligible, but a diagram would really help... ;-)
Laura Cyckowski's picture

This isn't directly related to CAs or agent-based models, but to emergence in general, and since we were talking about the brain last week... one of the most interesting hallmarks of emergence to me is the bi-directionality or "circularity" created: the power of a thing/entity that comes into existence via local interactions to in some way modify those lower levels. We were talking about antidepressants and mood last week in my psychopharmacology class, and I said that the idea of reducing/equating mood/self to just chemical states of the brain, and then trying to "fix" deviant states with antidepressants etc., was unsettling to me, as it is for many. Then I said that it was because we were only allowing influence in one direction, without the possibility of a collective product (maybe determined in part by chemical states but by many other aspects as well) having the capability to exert its own influence... Anyway, somehow I got labeled the dualist and "opponent," but I would think there is a fundamental difference, since in an emergent model the higher level is not separate from but dependent on the lower levels, just as the activity of the lower levels is "eventually" dependent on its collective creation.
PaulGrobstein's picture

can be a dangerous thing. Yep, the bidirectionality/circularity is a big deal. Very relevant to psychopharmacology and pharmacotherapeutics. I had just been over this subject in Bio 202 and have added your point to the summary. Thanks. And it IS "directly related" to the CA/agent-based models issue, as per CA's and Agent Based Models, so thanks for that too.