"Science has limits." This is a well-established truism: a Web search for "limits science" leads to what science is obviously not meant to deal with: moral values, aesthetic values, religion, and so on[1]. In this article, I will try to describe another limit of science, one connected to logical paradoxes of self-reference, which in my opinion has not received sufficient attention.
The scientific worldview
Science seeks to explain the world, which means that (a) observed phenomena can be classified using a compelling and efficient structure, and (b) this same structure provides the means to make useful predictions about the occurrence of future phenomena. Less verbosely: science both explains and predicts empirical phenomena.
To build such a structure, one has to start from two assumptions, which one might call "axioms of the scientific worldview"[2]:
- Empirical phenomena we observe exhibit some regularity.
- These regularities can, in principle, be unveiled by human inquiry.
I will also assume that these two principles imply that scientific inquiry should be logically consistent, i.e., that no inconsistencies should appear in the structure it uncovers.
Paradoxes of self-reference
What if the human mind itself becomes a subject of scientific scrutiny, as happens, for example, in psychology or neurobiology? More specifically still, what if scientific inquiry itself is scientifically analysed? I will show how paradoxes can appear in such cases.
Imagine a community of neuroscientists in a distant future with an excellent model of the human brain, able to predict a great deal of human behaviour; the explanatory and predictive successes of this theory give them very strong belief in its validity. A little boisterous, they finally turn to explaining their own scientific research with this very model.
What kind of answer can they expect? What if their model predicts that they should not (or even could not) have come up with the model in the first place? What if it tells them that they will soon reject the model and adopt a radically different one? And what if this different model predicts that they will go back to the first? If the model is right, it is wrong; the same holds for the alternative model. When agents apply a scientific theory to themselves, logical paradoxes can appear.
The potential for logical paradoxes should be eliminated from science. As a consequence, scientific self-reference has to be excluded: at least one of the axioms of the scientific worldview does not apply to the scientist herself. Either the scientist exhibits no regularities, or (more reasonably) these regularities cannot be completely uncovered.
The limit of science thereby established is relative: its exact form depends on the field of inquiry, and social scientists will face paradoxes different from those of biologists or physicists when they delve into self-referential terrain. Nevertheless, the limit is a priori in the sense that it will, in principle, appear at some point. I hope to clarify this by showing how the same problem appears in the formal sciences.
Connection to mathematics
The problem is not peculiar to the empirical sciences: surprisingly, it also appears in mathematics and pure logic[3]. The self-referential proposition "I am a liar" cannot consistently be evaluated as true or false; Russell's paradox concerns a similar paradoxical self-referential proposition about sets.
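The liar's failure to settle on a truth value has a simple computational analogue. In the following sketch (Python, purely for illustration; the function name is mine), a sentence whose truth value is defined as its own negation never finishes evaluating:

```python
def liar():
    """'I am a liar': the sentence's truth value is defined as
    the negation of itself, so evaluation never settles."""
    return not liar()

# Evaluating the liar yields neither True nor False: the
# self-reference unwinds forever, which Python reports as a
# RecursionError once the call stack is exhausted.
try:
    liar()
    verdict = "settled"
except RecursionError:
    verdict = "no consistent truth value"
```

The infinite regress in the evaluation mirrors the fact that no consistent assignment of true or false exists for the sentence.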
More generally, Gödel's two incompleteness theorems show that no sufficiently expressive mathematical theory can be both consistent and complete: some true statements cannot be proved within the theory. Indeed, one can construct propositions, themselves self-referential, whose truth value cannot be assessed within the theory.
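Schematically (my notation, not a full statement of the theorems), Gödel's construction yields a sentence $G$ that asserts its own unprovability:

```latex
G \;\longleftrightarrow\; \neg \operatorname{Prov}\left(\ulcorner G \urcorner\right)
```

If the theory proved $G$, it would prove a sentence asserting its own unprovability, contradicting consistency; so, if the theory is consistent, $G$ is true but unprovable within it.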
An example in computer science is the "halting problem": will a Turing machine executing a given program halt or not? No algorithm can answer this question for every program, because from such an algorithm one could construct a program that halts if and only if it does not halt[4]! The Turing machine whose program might or might not halt is analogous to the neuroscientists above, who ask whether their self-referential inquiry will give rise to a contradiction. Since they can never know in advance whether a contradiction will be encountered (which corresponds to halting for the Turing machine), they too face a form of the halting problem[5].
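The diagonal construction behind this result can be sketched in a few lines of Python (the names `halts` and `make_diagonal` are mine, chosen for illustration): given any candidate halting decider, one builds a program that does the opposite of whatever the decider predicts about it.

```python
def make_diagonal(halts):
    """Given a candidate decider halts(p), which claims to predict
    whether the zero-argument program p halts, build a program on
    which that decider must be wrong."""
    def diagonal():
        if halts(diagonal):
            while True:      # decider says "halts" -> loop forever
                pass
        # decider says "loops" -> halt immediately
    return diagonal

# A naive candidate decider that claims every program loops:
def always_loops(program):
    return False

d = make_diagonal(always_loops)
d()  # returns immediately, i.e. halts, so always_loops was wrong about d
# A decider answering True instead would make d loop forever: every
# candidate fails on its own diagonal program, so no decider exists.
```

Whatever the candidate decider answers about its diagonal program, the program does the opposite, which is exactly the contradiction the theorem exploits.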
A way out is essentially to distinguish different levels of propositions. A proposition about the truth of another proposition cannot fall under the same truth predicate, but only under a higher-level one. This essentially amounts to eliminating self-referential propositions and replacing them with an infinite hierarchy of meta-propositions. The limit I described in the previous section corresponds to the boundary between the first and the second level of such a hierarchy.
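This stratification can be made concrete in a toy model (a Python sketch of my own devising; genuine Tarskian semantics concerns formal languages, not runtime objects): each sentence carries a level, and a truth predicate of level n applies only to sentences of strictly lower level, yielding a sentence at level n.

```python
class Sentence:
    """A sentence in a stratified toy language."""
    def __init__(self, value, level):
        self.value = value  # its truth value
        self.level = level  # 0 = object language; 1, 2, ... = metalanguages

def truth(sentence, predicate_level):
    """Level-n truth predicate: applicable only to sentences of
    strictly lower level; the resulting sentence lives at level n."""
    if sentence.level >= predicate_level:
        raise ValueError("blocked: a truth predicate cannot apply to its own level")
    return Sentence(sentence.value, predicate_level)

snow = Sentence(True, level=0)   # an object-level sentence
meta = truth(snow, 1)            # "'snow...' is true" lives at level 1
# truth(meta, 1) raises ValueError: a liar sentence, which would need a
# level-1 predicate applied to a level-1 sentence, cannot be formed.
```

The self-referential sentence simply cannot be written in this language; the price is an unbounded ladder of ever-higher truth predicates.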
Addendum (March 6, 2015): It was rightly pointed out to me that my description of the halting problem was not entirely accurate. I modified the description and added a reference to correct this. Furthermore, it should be stressed that self-referential statements have the potential for logical inconsistencies—not that they will always produce inconsistencies. If one is happy to live with the mere potential for inconsistency, then self-reference cannot (from this argument alone) be excluded from scientific practice.
[1] Of course, there is always a tendency in the scientific endeavour to be "imperialistic", to the point of providing explanations and guidance even for questions of value or of belief. For instance, normative evolutionary ethics seeks to provide a scientific justification of moral precepts. I think such approaches are deeply misguided, simply because norms cannot (and certainly should not) be inferred from empirical states of affairs. ↩
[5] This shows why the restriction is very general and does not depend on the actual agents conducting the scientific inquiry. Neither humans nor aliens nor Turing machines nor black holes can escape the issue of self-referential paradoxes. ↩