Before we begin, let me give you a short list of the meanings of the most important words that I’m using in this particular piece:

reason -

  1. a cause, explanation, or justification for an action or event.
  2. the power of the mind to think, understand, and form judgments by a process of logic.

chance -

  1. the occurrence and development of events in the absence of any obvious design.
  2. a possibility of something happening.

variable -

  1. an element, feature, or factor that is liable to vary or change.
  2. (mathematics) a quantity which during a calculation is assumed to vary or be capable of varying in value.
  3. experimental variable - (statistics) a variable whose values are independent of changes in the values of other variables.

experiment -

  1. A test under controlled conditions that is made to demonstrate a known truth, to examine the validity of a hypothesis, or to determine the efficacy of something previously untried.

I will mark the words with a superscript like this: chance², reason¹, when there could be confusion as to which meaning of the word I’m using in a sentence.

    There has been no lack of interest in studying the relationship between the explainable/causal and the random aspects of the reality surrounding us. Pretty much every philosopher of the past worth his salt had an opinion on this subject. Maybe that is why the relatively simple meaning of this duality is still so blurred and uncertain for many. I would like to summarize, for myself, what I’ve learned in the past several years, in order to have a ready ‘cheat sheet’ for everyday use.
    First and foremost, let me remind you that the Cult of Correlation (Galton, Pearson, Fisher) didn’t acknowledge the existence of cause-and-effect relationships in their data or, better to say, they assumed that the relationships within their data are infinitely complex, and that the observable parameters in their non-experiments (more about this later) are infinitely complex agglomerations of parameters too. In this case of infinite dimensionality one can argue that the cause-and-effect relationship is always ‘averaged’ over the set of contributing factors. The second part of their doctrine was the concept of an ‘observation’ or ‘sample’, which is not an experiment, where the observer attempts to figure out the causal dependence of the phenomenon under investigation by deliberately changing its conditions and observing the result of such changes. The Cult of Correlation, under the false pretence that they were dealing with situations where intervention is ‘impossible’, rejected these time sequences of events (during the experiment) as a valuable source of information about the behavior of the system under investigation. In this sense, what they are talking about is always an imprint of a prehistoric animal in a rock: a timeless snapshot of the results of multiple infinitely complex sequences of events. Again, for the biological systems which were their primary concern, this assumption can be deemed close enough to reality.
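    To make the contrast between a ‘sample’ and an experiment concrete, here is a toy simulation of my own, not anyone’s historical method (the names observe, intervene, and corr are all hypothetical): a hidden common cause makes two passively observed quantities strongly correlated, while a deliberate intervention on one of them makes that correlation vanish.

```python
import random

random.seed(0)

def observe(n=10_000):
    """Passive 'sample': a hidden common cause z drives both x and y."""
    xs, ys = [], []
    for _ in range(n):
        z = random.gauss(0, 1)          # hidden common cause
        x = z + random.gauss(0, 0.3)    # x merely reflects z
        y = z + random.gauss(0, 0.3)    # y also reflects z; x does not cause y
        xs.append(x)
        ys.append(y)
    return xs, ys

def intervene(n=10_000):
    """Experiment: the investigator sets x at will, cutting its tie to z."""
    xs, ys = [], []
    for _ in range(n):
        z = random.gauss(0, 1)
        x = random.gauss(0, 1)          # x is now chosen deliberately
        y = z + random.gauss(0, 0.3)    # y still follows z only
        xs.append(x)
        ys.append(y)
    return xs, ys

def corr(xs, ys):
    """Plain Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
    sx = (sum((a - mx) ** 2 for a in xs) / n) ** 0.5
    sy = (sum((b - my) ** 2 for b in ys) / n) ** 0.5
    return cov / (sx * sy)

print("observed correlation:  %.2f" % corr(*observe()))    # strong, ~0.9
print("under intervention:    %.2f" % corr(*intervene()))  # near zero
```

    The timeless snapshot alone cannot distinguish these two worlds; only the act of changing something and watching what follows can.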
    The German physicists at the turn of the 20th century, on the other hand, had reached a precision of measurement in physical experiments sufficient for the analytical predictability of real processes, with some margin of error of course. In a sense they were solving the opposite problem: the investigation of the transient behavior of systems that were intentionally simplified for the sole purpose of being analytically predictable. And here come the ‘variables’ - they are what we change at will in the course of an experiment before collecting the experimental readouts/data. With them comes the fundamental goal of the design of experiments - the creation of a setup that lets you change the main parameters of the phenomenon under investigation independently. But as it soon turned out, and as every physicist worth his salt knows, there are ‘memory effects’ in most real systems, and what is happening in the system when, let’s say, the ‘temperature’ is equal to A depends on the ‘path’ to this value: how it was achieved, and what happened to the system and its surroundings before that.
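    As an illustration of such a memory effect, here is an entirely hypothetical sketch of mine, a thermostat-style relay with hysteresis: two temperature paths end at exactly the same value, yet leave the system in different states, because one of them passed through the switching threshold on the way and the other did not.

```python
def relay_state(temperature_path, on_above=70.0, off_below=60.0):
    """A relay with hysteresis: its state at the final temperature
    depends on the *path* taken, not just on the final value."""
    state = False
    for t in temperature_path:
        if t >= on_above:
            state = True    # latches on above the upper threshold
        elif t <= off_below:
            state = False   # releases below the lower threshold
    return state

# Both paths end at the same temperature, 65 degrees...
rising  = [50, 55, 60, 75, 65]   # overshot 70 on the way: relay latched on
falling = [50, 55, 60, 65]       # never reached 70: relay stayed off

print(relay_state(rising))   # True
print(relay_state(falling))  # False
```

    Knowing only that the ‘temperature’ equals 65 tells you nothing about the state of such a system; you must know its history.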
    As you can see, these two problems are almost exactly opposite to each other: one is the investigation of a static snapshot - the timeless State, without any concern for its previous evolution in time or for the involvement of the free will of humans; the other is the investigation of the dynamic response of a system, designed for a particular set of observations, to a deliberate change of one or several parameters (variables), while disregarding all the other changes in the environment.
    Here comes real life with its ‘technology’ and ‘engineering’. The moment humans started trying to apply scientific knowledge of phenomena to real systems, parts of which cannot be deliberately manipulated for one reason or another, the sharp boundary between the static world of Statistics, with its timeless fossilised ‘samples’, and the lively, beautifully predictable world of Newtonian dynamics in a wig of obsolescence started becoming more and more blurred. The smoky combustion engines and the long electric power lines sparkling when a thunderstorm comes refused to be either of these two.

Later.

Reading:
  1. Wesley C. Salmon, “Four Decades of Scientific Explanation”, 1989.
  2. Rudolf Carnap, “The Logical Structure of the World and Pseudoproblems in Philosophy”, 1967.