
Models: Child’s Play and/or Scientific Tool!/?

Flora Shepherd
"Machines take me by surprise with great frequency." — Alan Turing, found here.

When thinking about emergence, my mind ping-pongs between three different issues. The first two, my overwhelming distrust of Wolfram and explorations of the top-down dichotomy, I will save for another post. Since we are still discussing agent-based modeling, I will stick to my third fury.

From the first day of class, I have been troubled by the idea that in making computer models, our objective "is to be 'surprised', to 'surprise' others, to establish that some pattern/phenomenon that is presumed to depend on complexity/planning/a directive element can be produced without that. To show what might be, rather than what is." This has been a recurring theme in lecture, and it just does not sit well with me. Why is it that models cannot be used to solve problems? Why don't they portray what is? This rankles me. If the modeling method has no utility beyond surprise, then it is little more than an intellectual jack-in-the-box: entertaining and beautiful, but not appropriate for solving problems in a science class (see my icon?).

But modeling IS useful for more than just surprises, and I'm not the only one who thinks so. I went on a hunt for people solving real-world problems with these models. I found this article quite helpful; it touches on a lot of the interdisciplinary aspects of modeling. In this study, modeling is used to evaluate very pertinent social problems. This group even focuses on applications of modeling in the social sciences. There are tons of ways in which models are useful for thinking about everyday life. After all, I guess that's why we're studying emergence in this class.

To me, modeling is more akin to a pure thought experiment (general relativity creeps in everywhere). A thought experiment, loosely defined, is a way of testing a hypothesis without doing a physical experiment.
A lot of special and general relativity relies on thought experiments used in place of actual experiments, since we do not have the ability to observe many of these phenomena directly. Like thought experiments and mathematics, aren't models also a tool to extend the possibilities of thought? Do they translate into pure thought? After all, as I read on several social science sites, models also provide a way of simulating real-world social situations. Are they the next best thing to a human (instead of particle) accelerator?

All of these uses are fascinating and seem to me very important. At the very least, they widen the definition and purpose of models beyond the minimalist jack-in-the-box. Maybe my problem with the idea of surprise is just semantics. After all, if Alan Turing (above) thought that computational surprises were worth mentioning, then they probably are. But the importance of the surprise seems to me overshadowed by the importance of its implications. I definitely think that limiting the goal/definition of modeling to its ability to surprise is, well, limiting. It may be a characteristic of some models, but it's not the purpose of the discipline. I'm afraid I may be obsessing over a detail, but the topic of surprise has come up so much that I just had to get it out of my head.
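The claim that a pattern "presumed to depend on planning" can arise without any can be made concrete with a small agent-based sketch. What follows is my own minimal illustration in the spirit of Schelling's classic segregation model, not any model from the course: each agent follows one mild, local rule (move if fewer than 40% of my neighbors are like me), no agent wants segregation, yet the grid becomes strikingly sorted. All names and parameter values here are illustrative choices.

```python
import random

def neighbors(grid, r, c, n):
    """Moore-neighborhood values on a wrap-around grid, skipping empty cells."""
    vals = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            v = grid[(r + dr) % n][(c + dc) % n]
            if v is not None:
                vals.append(v)
    return vals

def unhappy(grid, r, c, n, threshold):
    """An agent is unhappy if too small a fraction of neighbors share its type."""
    me = grid[r][c]
    nb = neighbors(grid, r, c, n)
    if not nb:
        return False
    return sum(1 for v in nb if v == me) / len(nb) < threshold

def step(grid, n, threshold, rng):
    """Move every currently unhappy agent to a randomly chosen empty cell."""
    empties = [(r, c) for r in range(n) for c in range(n) if grid[r][c] is None]
    movers = [(r, c) for r in range(n) for c in range(n)
              if grid[r][c] is not None and unhappy(grid, r, c, n, threshold)]
    rng.shuffle(movers)
    for r, c in movers:
        if not empties:
            break
        er, ec = empties.pop(rng.randrange(len(empties)))
        grid[er][ec], grid[r][c] = grid[r][c], None
        empties.append((r, c))

def similarity(grid, n):
    """Average fraction of same-type neighbors, over all agents."""
    fracs = []
    for r in range(n):
        for c in range(n):
            if grid[r][c] is None:
                continue
            nb = neighbors(grid, r, c, n)
            if nb:
                fracs.append(sum(1 for v in nb if v == grid[r][c]) / len(nb))
    return sum(fracs) / len(fracs)

def run(n=20, threshold=0.4, steps=30, seed=0):
    """Random mixed start; return neighbor similarity before and after."""
    rng = random.Random(seed)
    cells = ["A"] * 170 + ["B"] * 170 + [None] * (n * n - 340)
    rng.shuffle(cells)
    grid = [cells[r * n:(r + 1) * n] for r in range(n)]
    before = similarity(grid, n)
    for _ in range(steps):
        step(grid, n, threshold, rng)
    return before, similarity(grid, n)
```

Nobody in this model "wants" a segregated grid; each agent would be content in a mixed neighborhood. The global sorting is exactly the kind of outcome presumed to need a directive element that turns out not to need one.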

Comments

Doug Blank

Personally, I think that you can largely set "surprise" aside along with "purpose", "intention", and "free will": none of these are well-defined enough to make predictions, or even to be useful as scientific concepts. Having said that, I do think there is something behind "surprise" that the other terms don't have. "Surprise" is an unscientific way of addressing the point that, even though computer models are deterministic, they end up doing things that surprise us, that we couldn't have "predicted." (Of course we can predict them, given that we have the exact starting state and we have already run it.)

Computer scientists and programmers write programs and models every day. They always complain when they are surprised, because that usually means they have a bug. Some think that if you are a better software designer, then you won't have bugs (i.e., surprises). Baloney. I think the study of emergence shows that when you have sufficiently interesting interacting agents, the system will undoubtedly have surprises/bugs. Reductionist tools won't help. In that sense, we don't have to try to write programs that will surprise us; we can't help but do it. If we could prevent surprises, then the system wouldn't do anything interesting.

So, don't get hung up on the so-called objective. "Surprise" is a word with a lot of semantic baggage. Paul uses it, I suspect, to be suggestive, provocative, and to make you consider this very different approach to writing software. You may appreciate it more after a long time of watching people trying NOT to write programs that surprise them, and utterly failing at that goal.
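The "deterministic yet unpredictable" point has a textbook illustration: Wolfram's Rule 30 cellular automaton (mentioned in the post's aside about Wolfram). The entire update rule is one line of fixed, deterministic logic, yet the center column it generates passes statistical randomness tests and has no known shortcut prediction. This sketch is my own illustration of that point, not code from the discussion:

```python
def rule30_step(cells):
    """One synchronous Rule 30 update on a ring of 0/1 cells.

    Rule 30's update is: new = left XOR (center OR right).
    Deterministic and trivial to state, yet the long-run pattern
    from a single 1 looks effectively random.
    """
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

def evolve(cells, steps):
    """Return the list of rows produced by iterating the rule."""
    rows = [cells]
    for _ in range(steps):
        rows.append(rule30_step(rows[-1]))
    return rows
```

Given the exact starting state, every row is perfectly reproducible, exactly Doug's parenthetical; what we cannot do is anticipate the pattern without running it, which is all "surprise" really means here.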
Paul Grobstein

Rankled isn't a bad place to be, particularly when one writes about it and so gets some important issues out on the table where they can be examined more closely. Thanks for doing all that. Yes, of course, modeling can be "useful for more than just surprises", and modeling is frequently used as a tool to help "solve problems". My characterization of modeling was not intended to be a general one but rather, as Doug says, to encourage in our particular context a "different approach" (an additional one, not a replacement). That said, there are some more general issues involved that are perhaps worth making more explicit.

My own interest in/involvement with computer modeling dates from the early 1990s (Grobstein, P. "Information processing styles and strategies: Directed movement, neural networks, space, and individuality", Behavioral and Brain Sciences 15: 750-752, 1992). At that time, as there had been before and will be again, there was considerable controversy about what modelling could and could not do. In particular, there were bitter arguments about whether the newly developed back-propagation networks were or were not "realistic", i.e., did or did not reflect how the nervous system actually does things. It was in that context, and while working with those models, that I began to realize that for many purposes "realistic" is not what one is looking for in a model, for several somewhat different but mutually reinforcing reasons. One is that any given phenomenon can always be modelled in a number of different ways, so exhibiting a model that achieves a particular outcome is never proof that what is being modelled actually does it in the particular way the model does it. A second is that given enough free parameters one can always find a way to model a given phenomenon, so exhibiting a model that achieves a particular outcome isn't even itself particularly surprising.
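The free-parameters point is a standard one and can be shown in a few lines. As a hedged illustration of my own (not from the comment), Lagrange interpolation gives a polynomial with n coefficients that passes exactly through any n data points, so a perfect fit, by itself, tells us nothing about the mechanism that produced the data:

```python
def lagrange_fit(xs, ys):
    """Return a function interpolating exactly through the given points.

    A degree n-1 polynomial has n free coefficients, so it can match
    ANY n data points perfectly, whatever process generated them.
    """
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = float(yi)
            for j, xj in enumerate(xs):
                if i != j:
                    term *= (x - xj) / (xi - xj)  # Lagrange basis factor
            total += term
        return total
    return p

# Four arbitrary, even nonsensical, "observations" are fit exactly:
p = lagrange_fit([0, 1, 2, 3], [5, -1, 4, 0])
```

This is why achieving a particular outcome with a sufficiently flexible model "isn't even itself particularly surprising", and why calibration against fresh real-world observations matters.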
For both of these reasons, even people who use models to try to solve real-world problems know (or should know) that they have to repeatedly calibrate their models against real-world observations. Modelling is useful in working on real-world problems, but the final test must always be in the real world.

It was after discovering, and getting over, the unhappiness about the inability of models to tell us much about the "real" nervous system that I realized that models CAN do some important things "on their own": they can cause us to think differently about what is necessary to make sense of particular phenomena; they can show that we don't actually need as many free parameters or as much complexity as we thought we needed. Yes, they are good "thought experiments", from which one can draw firm conclusions about how little one needs to account for some particular phenomenon, if not about what is "really" going on.

I very much agree with you that the point of modelling is "to extend the possibilities of thought". "Surprise", as I meant the term, is the signal that such a thing might in fact happen. And, I agree, it is worthwhile if and only if it is in fact used to do that. An interesting question to think further about is the relation between extending "the possibilities of thought" and "solving real world problems". Are they different activities, or in some sense the same (see Science as Story Telling and Story Revising and Theory and Practice of Non-Normal Inquiry)? Thanks again for getting these issues out on the table. Looking forward to talking more about them.