Chance: its meaning and significance
I still have a question about fortune - the aleatory, chance. I'm not sure why it feels like a problem: I guess it's to do with understanding the mechanism by which potential becomes actual ... What about C.S. Peirce's tychism? Did that notion go anywhere - do evolutionary biologists still use Peirce's coinage? ... Karla Mallette
What particularly intrigues me is that chance "feels like a problem" not in one discourse community (e.g. literature) but in many. Within physics there is a long and continuing debate about whether or not "God throws dice" (cf http://serendip.brynmawr.e
Chance, the aleatory, indeterminacy, stochasticity, tychism. Are these all the same thing? Are they expressions of existing limitations of human understanding, or features of what we are trying to understand? Can one answer this question with any certainty? Does it matter in science, in other forms of inquiry, in practical terms, in day to day life?
- Alternative perspectives on randomness and its significance
- On beyond an algorithmic universe
- Evolution/science: inverting the relationship between randomness and meaning
Some directions for future exploration:
- Ways of making sense of the world: from primal patterns to deterministic and non-deterministic emergence
- The limits of reason (Gregory Chaitin)
- Anti-determinism, tychism, and evolutionism (Charles Sanders Peirce)
- Bayesian probability
And some directions emerging from the conversation
- Gödel, incompleteness
- Liar's paradox
Additional relevant materials on Serendip
- The magic Sierpinski triangle
- A voyage of exploration: find Serendip
- From random motion to order: diffusion and some of its implications
A group of nine faculty and students, with interests in biology, physics, chemistry, computer science, literature, and education, were intrigued by the notion that "chance" presents similar issues in a variety of fields and were interested in taking it on as a transdisciplinary issue. For all of them, including several with immediate interests in evolutionary biology and/or Peirce, "tychism" (see "The issue" above) came as a surprise. The scientists present were also generally unfamiliar with the term "aleatory." Some explanation of Mallette's particular concern, the apparently random survival of texts from old Mediterranean cultures, led to an agreement that issues related to randomness are indeed important not only in scholarly inquiry but also in the development of canonical knowledge, in the design of curricula, and in teaching more generally. The notion that over-organized course content, presented without any acknowledgement of randomness, leaves students feeling unable to engage constructively with the material themselves was noted and put on the table for further consideration.
Some discussion of different disciplines, most particularly physics and biology, led to the recognition of a distinction between approaches that seek only to make sense of observations and approaches that attribute some broader meaning to the ways that one makes sense of observations, and a distinction between randomness as an acknowledgement of a lack of complete knowledge and randomness as a phenomenon in its own right that in turn could be used to account for other phenomena. The issue of whether either of these two distinctions entails the other was briefly discussed and added to a list of issues for further consideration.
With regard to the second distinction, randomness as a consequence of what we don't know and randomness as an irreducible unpredictability, a consensus seemed to be reached that neither of the two perspectives could be ruled out by existing observations and, more generally, that both would remain viable given any conceivable finite set of future observations. It was noted that people do nonetheless in practice often choose between the two alternative perspectives and that such choices have implications for future action. Two additional possibilities were raised: that one could "toggle" back and forth between the two perspectives, and that one could treat the distinction itself as meaningless. The group agreed to explore further the reasons people have for choosing among these four different approaches and the implications of each for the future.
The notion that randomness-as-ignorance is a "natural" ontological position was offered, and countered by the arguments that not all people adopt it and that cultural factors clearly can influence the choice. It was then suggested that randomness-as-ignorance leads to "falsifiable" hypotheses and hence further progress, whereas randomness as irreducible unpredictability does not. The resulting discussion suggested that there was something significant in this argument but that it required a little more specificity. Hypotheses involving some specific "stochastic" process can indeed be tested, but because of the nature of stochasticity itself they might require an infinite set of observations to be fully falsified. This in turn led to the question of whether complete ("digital") falsifiability is actually necessary for science (or for inquiry more generally). Could one conceive a kind of inquiry (scientific or otherwise) that would retain a form of evaluation and a measure of directedness/progress without complete falsifiability? Bayes, Schrödinger, and Peirce (see above) were alluded to in this regard, and it was agreed to take that question as the starting point for future conversation.
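The point that a stochastic hypothesis can be tested but perhaps never fully falsified by finite data can be sketched in a few lines of Python. This is only an illustration, with hypothetical numbers: a "fair coin" hypothesis is compared against one specific biased alternative, and finite evidence shrinks the fair hypothesis's relative likelihood without ever driving it to exactly zero.

```python
import random

def likelihood_ratio(flips, p_fair=0.5, p_alt=0.7):
    """Probability of the observed flips under a fair-coin hypothesis,
    divided by their probability under one specific biased alternative."""
    ratio = 1.0
    for heads in flips:
        ratio *= (p_fair if heads else 1 - p_fair) / (p_alt if heads else 1 - p_alt)
    return ratio

random.seed(1)
# Data secretly generated by the biased process.
data = [random.random() < 0.7 for _ in range(1000)]

# The fair-coin hypothesis is progressively weakened, but no finite
# sample drives the ratio to exactly zero: it is never fully falsified.
lr = likelihood_ratio(data)
print(lr)  # a tiny positive number, but never 0
```

Running longer only makes the ratio smaller; certainty, in the "digital" sense, never arrives.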
Gregory Chaitin's The limits of reason was the background reading for this session. A suggested background context was that Kurt Gödel established the existence of formal limitations of logical processes, that Alan Turing did the same for computability, and that Chaitin's "incompressible numbers" tie this line of development ("algorithmic undecidability") to randomness. It was suggested that a further link among the three is the dependence of each on problems that arise in self-referential formalisms.
Chaitin's Ω is a well-defined infinite number sequence that has no internal pattern (is "random") and additionally cannot, as Chaitin established through a proof closely related to the earlier work of Gödel and Turing, be generated by any computer program shorter than the number itself. To put it differently, the number sequence exists but is not "computable" from any set of starting conditions and rules; the sequence has no underlying explanation or reason. In the context of our discussion, the number is significant since there seems to be no way to get to it except by employing non-deterministic processes, ie processes that involve some element of randomness.
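True incompressibility in Chaitin's sense is uncomputable, but an off-the-shelf compressor offers a crude, one-sided stand-in for the idea: a sequence with internal pattern admits a much shorter description, while a (pseudo)random one does not. A minimal Python sketch, using zlib purely as an illustrative proxy:

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    """Length in bytes after zlib compression -- a crude upper bound on
    how short a description of the data can be (true Kolmogorov
    complexity is uncomputable)."""
    return len(zlib.compress(data, 9))

patterned = b"01" * 5000  # 10,000 bytes of pure repetition
random.seed(42)
noisy = bytes(random.getrandbits(8) for _ in range(10000))

print(compressed_size(patterned))  # far smaller than 10,000
print(compressed_size(noisy))      # close to (or above) 10,000
```

The asymmetry matters: a short compressed form proves a pattern exists, but a long one proves nothing, since a pattern the compressor misses may still be there.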
The significance of Ω depends on, among other things, how willing one is to accept its existence; the group didn't yet feel competent to follow and evaluate Chaitin's argument adequately for themselves. But it was agreed that, for at least some participants, Ω sharpens the question of whether science (and inquiry generally?) depends on an assumption that everything being inquired into must have well-defined and deterministic underlying causes. Must science (inquiry generally) fall into "one of two camps" with regard to randomness (it doesn't exist as a cause, or everything is accounted for by it), or is there a "third space," one that can make productive use of some involvement of underlying randomness?
Among the issues posed for further discussion were
the relation between Gödel/Turing/Chaitin and "quantum computing." Does quantum computing get around the limitations of formal systems and, if so, how? (The Bit and the Pendulum: The New Physics of Information by Tom Siegfried is accessible and helpful along these lines)
the relation between "self-referential" and "recursive"
the necessity for inquiry of reducing observations to get theory
the necessity for inquiry of both self-referentiality and noticing limitations
the relation of noticing limitations to Peirce's triad of induction, deduction, and abduction
the issue of "causes" that may be neither probable nor computable and its relation to Aristotelian concepts of causation
the possibility that all meaning is statistical, that singularities lack meaning
It was agreed to invest more time in some of the details of the Gödel/Turing/Chaitin sequence, beginning at our next meeting with Gödel.
April 20 meeting background: Gödel's theorem and its significance
April 20 meeting summary (Paul)
Conversation largely focused on the usefulness or lack thereof of the suggested Gödel/Turing/Chaitin analysis of the limitations of formal systems (see first section of Evolving systems, Gödel's theorem, and its significance).
Many humanists think of Gödel as "on their side," ie they think of work of this kind as establishing what they already know: that formal/rational systems are not the only, perhaps not even the best, foundation for inquiry. From this perspective, it's not clear that there is anything to be gained by a closer look at the Gödel/Turing/Chaitin sequence. Is there any reason to believe it is relevant to such humanistic perspectives? Is there anything there that would add to them, or point in new directions relevant to them?
A different challenge comes from those who value the formal/rational methodology. Economists, for example, are familiar with Kenneth Arrow's "impossibility theorem," a formal demonstration that a perfectly "fair" voting system cannot exist (with "fair" carefully and formally defined). Such findings (and by extension Gödel/Turing/Chaitin), it is argued, show what cannot be done but have no broader significance. Limitations are limitations; there remains plenty of useful work that can be done using formal/logical forms of analysis. Significantly, here too the same questions arise as in the case of the humanistic challenge: is there anything to be gained from a closer analysis of instances of formal impossibility or incompleteness?
The development of non-Euclidean geometries was offered as an example of how formal analysis and a recognition of its limitations could indeed have productive outcomes. The formalization of properties of space as it is normally perceived into a set of axioms and methods for proving theorems led in turn to a recognition that by challenging aspects of the formal system one could achieve a more general understanding of what is meant by space, one that included spaces and properties not previously considered possible.
It was further noted that the issue of the values and limitations of formal/logical systems repeatedly arises in a variety of contexts, even for those who are otherwise inclined to dismiss its significance. One example was the feeling among most academics that they need to "justify" grades. A second, related example had to do with the "check out counter" phenomenon, a feeling (desirable or undesirable?) that one had to justify one's own activities at particular times and places by a formal accounting. Another was the occasional feeling among people with a commitment to rational processes (not usually openly admitted to) that they might actually need a "new religion." In milder form, it is the recognition that their own work in fact depends on occasional "flashes of insight" that they find difficult to account for rationally/formally.
Against this background, it was agreed to proceed with a closer look at Gödel/Turing/Chaitin to find out whether it could indeed open new directions not only for science and the humanities but for inquiry in general. Along these lines, it was noted that formal systems can play several different roles. They can be used to summarize empirical observations and, by so doing, suggest new directions for exploration. In these terms, demonstrations of impossibility or incompleteness are not a problem, and may indeed be a virtue. It is only when formal systems are treated as primarily anticipatory, ie as certain predictors of what can or will be, that demonstrations of impossibility or incompleteness come to be seen as threats.
In looking more closely at Gödel/Turing/Chaitin, we will bear in mind the question of whether it is in fact "self-referentiality" that leads to "kookiness" (is self-referentiality inherent in the formal system, or something Gödel added?). We will also bear in mind that Gödel's proof does not itself show the existence of something beyond what is possible in formal systems, only that particular formal systems cannot exhaust the range of possibilities; it is Turing and Chaitin who went on to exhibit particular meaningful things of this kind.
May 5 meeting background: Gödel's theorem and its significance
May 5 meeting summary (Paul)
Some introductory conversation started with the notion that there is a bit of the positivist in all of us, as well as a bit of ... something else, and with efforts to better understand what that dichotomy is, why some people find logical thinking off-putting and others are attracted to it. Such a split occurs in day to day life and in many disciplines (analytic vs continental philosophy, for example), and is sometimes characterized as comfort or lack thereof with mathematics but seems to have a deeper origin. One suggestion was that it has to do with a distaste for versus tolerance of inconsistency. Another was that it had to do with a preference for formal systems versus a preference for more fluid, associational, and indeterminate thinking. It was more or less agreed that finding a better way to characterize "something else," a way that didn't depend on opposition to "logical," was a desired product of the continuing deeper exploration of Gödel/Turing/Chaitin. The issue of "consistency" in quantum logic/quantum computing was flagged for future discussion.
Also part of the introductory conversation was what is meant by a "formal system." Baseball games and chess both have a starting position and proceed by a set of rules. The former, it was suggested, don't constitute a formal system because participants don't make choices "mindlessly," ie by following a set of rules that exists prior to the choice and would always yield the same behavior in a given situation. Chess players may or may not make choices "mindlessly." Those who do may be quite successful but tend to be characterized by others as "technically sound" but lacking ... soul? This in turn led to concerns that the agenda of these discussions was aimed at establishing the existence of "mind" or "God." Here too it was more or less agreed that one desired product of the conversations was instead to come to a better understanding of what exists (or doesn't exist) beyond the realm of the "mindless." For present purposes, a "formal system" is understood to mean a system with a well-defined set of presumptions and well-defined rules of operation that will always yield the same outcome for a given starting point. The rules of operation, or "algorithm," are presumed to be deterministic, in order to assure both consistency and reproducibility.
We moved on to a closer look at Gödel's proof, beginning with his "Gödelization" or "arithmetization" of arithmetic statements. Such statements constitute a "countably infinite" set, ie they are listable in a linear sequence like the natural numbers. While there are an infinite number of such statements, "infinity" is explicitly not understood as "everything included." The natural numbers (and hence arithmetic statements) are a subset of larger infinities, such as the real numbers. This was established by Cantor using a "diagonalization" argument, essentially showing that any conceivable countable list left off some real numbers and necessarily continued to do so even when particular missing numbers were added to it. A similar argument for the existence of things beyond the "countably infinite" is at the core of Gödel's proof, as well as Turing's and Chaitin's.
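The finite shadow of Cantor's diagonalization can be made concrete in a few lines (a Python sketch, purely illustrative): given any listing of binary sequences, flipping the k-th digit of the k-th sequence produces a sequence that provably differs from every entry in the listing.

```python
def diagonal_escape(listing):
    """Given n binary sequences (each of length >= n), build a sequence
    that differs from the k-th listed sequence at position k -- so it
    cannot appear anywhere in the listing.  Adding it to the listing
    just creates a longer listing with a new escapee."""
    return [1 - listing[k][k] for k in range(len(listing))]

listing = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
]
escaped = diagonal_escape(listing)
print(escaped)  # [1, 0, 1, 1] -- differs from row k at position k
```

This is exactly the sense in which the listing "necessarily continued" to leave numbers off: every repair of the list generates a fresh counterexample.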
The Gödel proof holds not only for arithmetic but for any formal symbolic system involving a deterministic set of elements and rules of generation with a fixed set of symbols and finite sentence length. English sentences (or for that matter books) according to this argument are also "countably infinite," and hence a complete catalogue of "expressible" human understandings in any language leaves open the possibility of additional understandings inexpressible in that symbolic system. At the same time, it was pointed out that the process of formalization can itself bring into existence things not previously conceived, the transfinite numbers being a case in point. To put it differently, formal systems can be thought of as not only demonstrations of the limitations of existing understandings and methods of generating understandings but also as a mechanism by which to bring into existence possibilities that didn't previously exist (as per non-Euclidean geometries mentioned above). In other words, formal systems should not be understood as constraints on what can be understood but as tools to expand the range of possible understandings.
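The claim that sentences over a fixed finite alphabet are "countably infinite" amounts to saying they can be placed in one linear list. A short sketch (Python, shortest-first "shortlex" order over a toy two-letter alphabet; the alphabet is an arbitrary illustrative choice):

```python
from itertools import count, product

def shortlex(alphabet):
    """Yield every finite string over `alphabet`, shortest strings
    first -- a single linear listing, which is what makes the set of
    all finite strings countably infinite."""
    for length in count(1):
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

gen = shortlex("ab")
first = [next(gen) for _ in range(6)]
print(first)  # ['a', 'b', 'aa', 'ab', 'ba', 'bb']
```

Every finite string eventually appears at some definite position, which is precisely the property the diagonal argument shows the real numbers lack.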
As earlier noted, Gödel's proof provides a reason to believe there may be important statements/understandings outside the "countably infinite" number formally generated, but does not give an example of such a statement. One can still entertain the possibility that the Gödel limitation is a "singularity" that can be noticed but need not be regarded as a general problem. To address this concern, we'll look next at the work of Turing and Chaitin. Also not fully examined and to be returned to was the issue of where self-referentiality comes from and the role it plays in the limitations of formal systems.
May 19 meeting background: Between Gödel and Turing
May 19 meeting summary (Paul)
Discussion of Turing and Chaitin was deferred to a subsequent meeting in order to first consider a question that arose during the last meeting and subsequent discussion: the relevance (or lack thereof) of formal systems and their limitations for things other than logic, mathematics, and computing. See Between Gödel and Turing.
It was suggested (Between Gödel and Turing) that one could not "opt out" of the use of formal systems, that whatever one's familiarity with or attitude towards them the process of trying to reduce one's experiences to underlying "principles and rules" was an inherent part of thought in all people (a feature of one aspect of brain architecture common to all human beings). A parallel was offered between the Gödel limitation for logical systems and Wittgenstein's notion that human language was limited in its scope, that some things are "inexpressible". Hence, an appreciation of the origins and significance of the Gödel limitation is relevant for understanding human thought quite generally.
It was further suggested that one might think of the Gödel limitation not in terms of what is absolutely "expressible" or "knowable" but rather in terms of what is knowable/expressible given a particular formal system (a particular set of properties and rules). From this perspective, the Gödel limitation isn't an acknowledgement of any fundamental distinction between the expressible and the inexpressible but rather a recognition that particular formal systems themselves create a distinction between expressible and inexpressible, and that the latter can become expressible by a change in the formal system. In short, the Gödel limitation is not an argument against using formal systems but rather provides a strategy for their more effective use: recognize the limitations of any given formal system and alter one's use of formal systems to expand the range of expressible things. This might be done by altering a particular formal system and/or by making use of several different formal systems (see Forms of Inquiry). An appealing feature of this perspective is that it makes the distinction between the inexpressible and the expressible a function of the inquiry process itself rather than a characteristic of things "out there." It assures that there will always be an "inexpressible" to inquire into, since formal systems themselves contribute to creating the inexpressible, and it encourages "chatter" rather than silence in the face of the inexpressible.
This suggested way of appreciating Gödel's work in a broader context in turn raised a number of issues, including whether it is actually a direct or only a metaphorical extension of Gödel's incompleteness theorem, and how it relates to issues of self-referentiality, consistency, provability, and simpler logical systems for which the incompleteness theorem does not hold. It also raised the issues of whether one does or does not need an "outside observer" to make the distinction between "expressible" and "inexpressible," the relation between that distinction and "Nature is what we are put on earth to overcome" (Katharine Hepburn in The African Queen), the preservation or loss of a line of "demarcation" between various forms of human activity (science, art, religion), the relation between human and "domain specific" computer languages, and the relation of human thought to the process of inquiry.
The expectation is that the exploration of a number of these issues, as well as of the broader significance of the Gödel limitation, will be usefully focused and advanced by talking next about Turing and Chaitin. Is a computer an inquirer? Is there inquiry other than "trying to get to the bottom"? Is inquiry "creative"? Is the brain a computer? Are there things that brains can conceive that computers can't? And, if so, why?
June 2 background notes and background reading:
- Crossing the lines of science and formal systems to ... ?
- ten thousand questions!
- More on demarcation
And on from formal systems to randomness
June 2 meeting summary (Paul)
Rather than focusing on the limitations of Turing computability and its relationship to randomness, as originally planned, this conversation focused largely on formal systems, science, and the "demarcation" problem. Science, it was pointed out, should not be equated with "formal systems." Whatever role formal systems play in science, there is an at least equally strong reliance on empirical observations both to motivate and to test hypotheses/statements/expressibles. The limitations of particular formal systems are not necessarily limitations of science either in practice or in theory.
At this point, "science" and the demarcation problem (what distinguishes science from other things, including "pseudoscience") rather than "inquiry" became a matter of central concern (for continuing thoughts along these lines, see for example Parascience, Beyond demarcation, and related comments in the forum below). It was suggested that science is distinctive in being committed to "value free" inquiry and that this was important to avoid problems of "disbelief in evidence." This in turn provoked challenges about the actual practice of science as opposed to the aspiration, as well as about whether evidence was actually value free and whether a distinction between practice and principle was sustainable. The "pursuit of consistent, reproducible findings," it was suggested, itself introduced "values" into scientific aspiration/practice.
An inclination to demarcate not only with regard to science but generally was itself both attacked and defended. Demarcation tends to create antagonisms and power imbalances, and so can contribute to oppression. On the other hand, demarcation can contribute to the productive existence of multiple, specialized approaches to inquiry that can in principle be more productive than any single generalized approach. Perhaps an appropriate resolution is to accept the usefulness of demarcation but work to assure integration rather than conflict among the various specialized approaches. The suggestion here, as earlier with regard to formal systems specifically, is not to feel a need to pick between demarcated things but rather to value each for what they individually and distinctively bring to a larger task.
Perhaps the issue of the role of "values" in science, and inquiry generally, like the issues of the role of "mind" and "soul" raised earlier, can be illuminated by coming to grips (albeit belatedly) with Turing computability and randomness.
June 16 background notes and background reading:
June 16 meeting summary (Paul)
It was agreed that issues of the nature of science and of the uses and problems of demarcating science raised in the earlier discussion had not been resolved, but that they might usefully be returned to in light of planned discussions of Turing, computability, Chaitin, and algorithms/non-compressibility. Against this background, discussion began with the notion of a Turing machine and a universal Turing machine (see "Computers as formal systems and their limitations" and the following section of Beyond Incompleteness II, http://serendip.brynmawr.edu/exchange/node/7642). It was also noted that while the focus here, as in the case of formal systems earlier, is on limitations or "incompleteness," in the case of both formal systems and Turing computability there are, and continue to be, an enormous array of useful things that can be done within those limits.
Turing computability was likened to formal systems in the sense that there is a fixed and finite starting condition together with a fixed and finite set of construction rules from which everything else follows deterministically. It was noted that the suggested parallel was worth examining in finer detail, both to test it more fully and to more completely understand the relation between the two. For present purposes, the presumed equivalence is, more or less, the Church-Turing thesis: that what is logically deducible is what is Turing computable and vice versa.
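The "fixed starting condition plus fixed rules, everything else deterministic" picture can be made concrete with a toy simulator (a Python sketch; the machine and its rule table are illustrative inventions, not anything from the discussion). The same rules and the same starting tape yield the same output, every run.

```python
def run_turing_machine(rules, tape, state, head, blank="_", max_steps=1000):
    """Deterministic Turing machine: `rules` maps (state, symbol) to
    (new_symbol, move, new_state).  Same start, same rules -> same
    output, which is the sense in which the machine is a formal system."""
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    keys = sorted(cells)
    return "".join(cells.get(i, blank)
                   for i in range(keys[0], keys[-1] + 1)).strip(blank)

# A tiny machine that adds 1 to a binary number: the head starts on the
# rightmost digit and propagates the carry leftward.
increment = {
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}
print(run_turing_machine(increment, "1011", state="carry", head=3))  # 1100
print(run_turing_machine(increment, "111", state="carry", head=2))   # 1000
```

The rule table plays the role of the axioms and rules of inference; the tape history plays the role of a derivation.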
Among the initial questions about Turing machines was to what degree one could make such machines themselves the target of inquiry: could one "reverse engineer" a Turing machine, determine its starting condition and construction rules from observing its output? In some cases (reversible machines) the answer is yes (and there are energetic issues that might favor focus on such cases) but in most cases the answer is no. Most computation involves irreversible steps, with a given output resulting from more than one possible prior state. In addition, paralleling the problem of inference from finite samples, there are always an infinite number of ways a given output state might have been achieved. The latter problem might be overcome by starting with the presumption that the observed output results from some Turing machine process (rather than, for example, from a random number generator).
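The irreversibility point can be sketched with the smallest possible example, a logical AND gate (Python, purely illustrative): one output value has a unique prior state, while the other is reachable from three, so observing the output cannot in general recover the input.

```python
from itertools import product

def preimages(step, output):
    """All two-bit input states that `step` maps onto `output`.  When
    there is more than one, the prior state is unrecoverable from the
    result -- the computation has discarded information."""
    return [bits for bits in product([0, 1], repeat=2) if step(*bits) == output]

def AND(a, b):
    return a & b

print(preimages(AND, 1))  # [(1, 1)] -- uniquely recoverable
print(preimages(AND, 0))  # [(0, 0), (0, 1), (1, 0)] -- three states merge
```

A computation built from such gates merges histories at every irreversible step, which is why "reverse engineering" from output alone usually fails.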
There was interest also in whether the "state" of a Turing machine could be equated to a brain "state," and whether the "meaning" of an input changes for different Turing machine states, as one imagines the meaning of an input does for different brain states. The issue of the relation between brain states and Turing machine states is directly related to the comparison between the two we are headed for. In some ways the relation is close: each brain state, like each Turing machine state, has the previous state as a major determinant. In other ways the parallel probably fails (indeterminacy, no inner "experience" of state in the case of a Turing machine). The lack of parallelism will, for these reasons, probably turn out to be greater with regard to "meaning." The state of a Turing machine affects what it does with a particular symbol, but it is not clear that there is anything comparable to the human sense of "meaning" in the Turing machine, so it is not clear that there is "meaning" in saying that changes in the state of the Turing machine yield changes of "meaning."
With this background on Turing machines, the limitations on possible outputs were illustrated by the halting problem. It was noted that the proof has close parallels to the incompleteness of formal systems, and relates to the outputs being countably infinite rather than larger; there are additional close parallels in Chaitin's proof of the existence of non-computable, non-compressible, truly "random" numbers, but neither proof was closely examined. Discussion proceeded instead to the question of the relation between formal systems, Turing computability, and the brain (a problem outlined by Roger Penrose in his The Emperor's New Mind).
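Although halting in general is undecidable, "halts within N steps" is perfectly computable, and the asymmetry between the two is instructive. A hedged Python sketch, using the Collatz process as a stand-in (whether every starting number halts is itself a famous open question):

```python
def collatz_halts_within(n, max_steps):
    """Run the Collatz process from n for at most max_steps steps.
    Returns True if it reaches 1 (halts); returns None -- 'don't
    know' -- otherwise.  Running longer can turn None into True, but
    no finite budget can ever certify non-halting: that one-sidedness
    is the semidecidable shape of the halting problem."""
    for _ in range(max_steps):
        if n == 1:
            return True
        n = 3 * n + 1 if n % 2 else n // 2
    return None

print(collatz_halts_within(27, 200))  # True  (reaches 1 in 111 steps)
print(collatz_halts_within(27, 50))   # None  (budget too small to decide)
```

Turing's theorem says no program can replace the `None` with a guaranteed correct True/False for all inputs; the most any machine can do is push the step budget further out.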
In this regard, the question was posed of whether non-computable things have any practical significance, as opposed to being simply mathematical/philosophical oddities. Are they of significance only because some (many) people don't understand that "twentieth century positivism should have died"? It was suggested that a disinclination to take non-computability/"causelessness" seriously is actually at the root of many contemporary problems, including the current oil spill. The assertion that someone/some thing must have been at fault, that there are ways to control all variables so that such things wouldn't happen, wouldn't be possible if people actually took non-computability/"causelessness" seriously, and this might in turn have a significant impact on energy policy discussions. A similar argument was made with regard to the financial crisis: one needs to design systems not in an effort to preclude the possibility of unanticipated occurrences but rather to minimize the impact of such events, on the assumption that they will always occur with some probability.
This digression in turn highlighted a long-term concern of the present series of conversations (and of the evolving systems project generally): in what ways would taking seriously the notion that things are never fully predictable, that they always have some element of indeterminacy, change not only approaches to energy and financial policy but also the process of inquiry itself? There followed an intriguing contrast. On the one hand, an assertion that we are largely already there in practice, in "habitus." On the other, a reminder that we still routinely separate "theory" and "practice," as if we persist in the belief in two worlds, one ideal and deterministic, the other perhaps messier. Is there another way to think about understanding and inquiry, one that doesn't use that distinction? Offered as a possibility was recognizing that formal systems don't describe what is eternally and deterministically "out there" but instead exist "in here" as transient "stories," of practical use at any given time but always revisable/discardable. And that conflicts between stories, at least as much as conflicts between particular stories and any external "reality," should be regarded as the driving force and rationale of inquiry.
This in turn offers a perspective on how brains differ from Turing machines and how they can embody formal systems without being fully constrained by their limitations, one that will be returned to when the group next convenes in the fall.
Continuing conversation, in on-line forum below