
Logic and truth in economics

from Lars Syll

[Image: “Logic yields validity, not truth.” (post by Ziya on Boldomatic)]

To be ‘analytical’ and ‘logical’ is something most people find commendable. These words have a positive connotation. Scientists think deeper than most other people because they use ‘logical’ and ‘analytical’ methods. In dictionaries, logic is often defined as “reasoning conducted or assessed according to strict principles of validity” and ‘analysis’ as having to do with “breaking something down.”

But that’s not the whole picture. As used in science, analysis usually means something more specific. It means to separate a problem into its constituent elements so as to reduce complex — and often complicated — wholes into smaller (simpler) and more manageable parts. You take the whole and break it down (decompose it) into its separate parts. Looking at the parts separately, one at a time, you are supposed to gain a better understanding of how these parts operate and work. Built on that more or less ‘atomistic’ knowledge, you are then supposed to be able to predict and explain the behaviour of the complex and complicated whole.

In economics, that means you take the economic system, divide it into its separate parts, analyse these parts one at a time, and then put the pieces back together.

The ‘analytical’ approach is typically used in economic modelling, where you start with a simple model with few isolated and idealized variables. By ‘successive approximations,’ you then add more and more variables and finally get a ‘true’ model of the whole.

This may sound like a convincing and good scientific approach.

But there is a snag!

The procedure only really works when you have a machine-like whole/system/economy where the parts appear in fixed and stable configurations. And if there is anything we know about reality, it is that it is not a machine! The world we live in is not a ‘closed’ system. On the contrary. It is an essentially ‘open’ system. Things are uncertain, relational, interdependent, complex, and ever-changing.

Without assuming that the underlying structure of the economy you try to analyse remains stable/invariant/constant, there is no chance the equations of the model remain constant. That is the very rationale for economists’ use (often only implicit) of the ceteris paribus assumption. But — nota bene — this can only be a hypothesis. You have to argue the case. If you cannot supply any sustainable justifications or warrants for the adequacy of making that assumption, the whole analytical economic project becomes pointless, non-informative nonsense. Not only do we have to assume that we can shield variables off from each other analytically (external closure). We also have to assume that each and every variable is itself amenable to being understood as a stable, regularity-producing machine (internal closure). Which, of course, we know is as a rule not possible.

Some things, relations, and structures are not analytically graspable. Trying to analyse parenthood, marriage, employment, etc., piece by piece doesn’t make sense. To be a chieftain, a capital-owner, or a slave is not a property of an individual taken in isolation. It can come about only when individuals are integral parts of certain social structures and positions. Social relations and contexts cannot be reduced to individual phenomena. A cheque presupposes a banking system, and being a tribe-member presupposes a tribe. When economists fail to take account of this in their ‘analytical’ approach, economic ‘analysis’ becomes uninformative nonsense.

Using ‘logical’ and ‘analytical’ methods in the social sciences means that economists succumb to the fallacy of composition — the belief that the whole is nothing but the sum of its parts. In society and in the economy this is arguably not the case. An adequate analysis of society and economy a fortiori cannot proceed by just adding up the acts and decisions of individuals. The whole is more than the sum of its parts.

Mainstream economics is built on the ‘analytical’ method. The models built with this method presuppose that social reality is ‘closed.’ Since social reality is known to be fundamentally ‘open,’ it is difficult to see how models of that kind can explain anything about what happens in such a universe. Postulating closed conditions to make models operational and then imputing these closed conditions to society’s real structure is an unwarranted procedure that does not take the necessary ontological considerations seriously.

In the face of the kind of methodological individualism and rational choice theory that dominates mainstream economics, we have to admit that even if knowing the aspirations and intentions of individuals is a necessary prerequisite for explaining social events, it is far from sufficient. Even the most elementary ‘rational’ actions in society presuppose the existence of social forms that cannot be reduced to the intentions of individuals. Here, the ‘analytical’ method fails again.

The overarching flaw of the ‘analytical’ economic approach based on methodological individualism and rational choice theory is basically that it reduces social explanations to purportedly individual characteristics. But many of the characteristics and actions of the individual originate in and are made possible only through society and its relations. Society is not a Wittgensteinian ‘Tractatus-world’ characterized by atomistic states of affairs. Society is not reducible to individuals, since the social characteristics, forces, and actions of the individual are determined by pre-existing social structures and positions. Even though society is not a volitional individual, and the individual is not an entity given outside of society, the individual (actor) and the society (structure) have to be kept analytically distinct. They are tied together through the individual’s reproduction and transformation of already given social structures.

Since at least the marginal revolution in economics in the 1870s, it has been an essential feature of economics to ‘analytically’ treat individuals as independent and separate entities of action and decision. But, really, in such a complex, organic and evolutionary system as an economy, that kind of independence is a deeply unrealistic assumption to make. Simply assuming that there is strict independence between the variables we try to analyse doesn’t help us in the least if that hypothesis turns out to be unwarranted.

To be able to apply the ‘analytical’ approach, economists basically have to assume that the universe consists of ‘atoms’ that exercise their own separate and invariable effects in such a way that the whole consists of nothing but an addition of these separate atoms and their changes. These simplistic assumptions of isolation, atomicity, and additivity are, however, at odds with reality. In real-world settings, we know that the ever-changing contexts make it futile to search for knowledge by making such reductionist assumptions. Real-world individuals are not reducible to contentless atoms and so are not susceptible to atomistic analysis. The world is not reducible to a set of atomistic ‘individuals’ and ‘states.’ How variable X works and influences real-world economies in situation A cannot simply be assumed to be understood or explained by looking at how X works in situation B. Knowledge of X probably does not tell us much if we do not take into consideration how it depends on Y and Z. It can never be legitimate just to assume that the world is ‘atomistic.’ Assuming real-world additivity cannot be the right thing to do if the things we have around us, rather than being ‘atoms,’ are ‘organic’ entities.

If we want to develop new and better economics, we have to give up the single-minded insistence on a deductivist straitjacket methodology and the ‘analytical’ method. To focus scientific endeavours on proving things in models is a gross misapprehension of the purpose of economic theory. Deductivist models and ‘analytical’ methods disconnected from reality are of no relevance for predicting, explaining or understanding real-world economies.

To have ‘consistent’ models and ‘valid’ evidence is not enough. What economics needs are real-world relevant models and sound evidence. Aiming only for ‘consistency’ and ‘validity’ sets the aspiration level of economics too low for developing a realist and relevant science.

Economics is not mathematics or logic. It’s about society. The real world.

Models may help us think through problems. But we should never forget that the formalism we use in our models is not self-evidently transportable to a largely unknown and uncertain reality. The tragedy with mainstream economic theory is that it thinks that the logic and mathematics used are sufficient for dealing with our real-world problems. They are not! Model deductions based on questionable assumptions can never be anything but pure exercises in hypothetical reasoning.

The world in which we live is inherently uncertain and quantifiable probabilities are the exception rather than the rule. To every statement about it is attached a ‘weight of argument’ that makes it impossible to reduce our beliefs and expectations to a one-dimensional stochastic probability distribution. If “God does not play dice” as Einstein maintained, I would add “nor do people.” The world as we know it has limited scope for certainty and perfect knowledge. Its intrinsic and almost unlimited complexity and the interrelatedness of its organic parts prevent the possibility of treating it as constituted by ‘legal atoms’ with discretely distinct, separable and stable causal relations. Our knowledge accordingly has to be of a rather fallible kind.

If the real world is fuzzy, vague and indeterminate, then why should our models be built upon a desire to describe it as precise and predictable? Even if there always has to be a trade-off between theory-internal validity and external validity, we have to ask ourselves whether our models are relevant.

‘Human logic’ has to supplant the classical — formal — logic of deductivism if we want to have anything of interest to say of the real world we inhabit. Logic is a marvellous tool in mathematics and axiomatic-deductivist systems, but a poor guide for action in real-world systems, in which concepts and entities are without clear boundaries and continually interact and overlap. In this world, I would say we are better served with a methodology that takes into account that the more we know, the more we know we do not know.

  1. ghholtham
    November 28, 2020 at 7:03 pm

    “What economics needs are real-world relevant models and sound evidence.”
    No-one can disagree with that.
    “Our knowledge accordingly has to be of a rather fallible kind.”
    Of course it is – very much so.
    “To every statement about it is attached a ‘weight of argument’ that makes it impossible to reduce our beliefs and expectations to a one-dimensional stochastic probability distribution.”
    Well, context is all. Our beliefs and expectations about a host of things cannot be treated in that way but if you are asked about a specific price the day after tomorrow, you can express your expectation in terms of a probability distribution, albeit one that is the outcome of processes more complex than you can specify. No-one can enumerate all the influences on a share price, for example, but you can buy options that in effect imply probabilities.
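    A minimal sketch of that last point, with made-up numbers (the strikes, call prices and discount factor below are all invented): call prices across strikes imply risk-neutral probabilities for the underlying at expiry.

    ```python
    # Sketch: back out implied probabilities from (hypothetical) call option prices.
    # dC/dK = -exp(-rT) * P(S_T > K), so a first difference across strikes gives an
    # estimate of the risk-neutral probability of finishing above a strike.
    import numpy as np

    strikes = np.array([90.0, 95.0, 100.0, 105.0, 110.0])   # hypothetical strikes
    calls = np.array([11.2, 7.1, 3.9, 1.8, 0.7])             # hypothetical call prices
    discount = 0.999                                          # assumed exp(-rT)

    prob_above = -np.diff(calls) / np.diff(strikes) / discount
    mid_strikes = (strikes[:-1] + strikes[1:]) / 2

    for k, p in zip(mid_strikes, prob_above):
        print(f"implied P(S_T > {k:.1f}) ~= {p:.2f}")

    # These are risk-neutral, market- and model-dependent numbers, not 'objective'
    # frequencies; that distinction is exactly what is being debated in this thread.
    ```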
    “‘Human logic’ has to supplant the classical — formal — logic of deductivism if we want to have anything of interest to say of the real world we inhabit.”
    What does that mean? What is “human logic” and how does it differ from logic? If the statement means that facts are empirical and cannot be deduced from first principles, that’s true.
    “In this world, I would say we are better served with a methodology that takes into account that the more we know, the more we know we do not know.”
    I am not sure how we do that but to the extent that it means we should look at things rather than just think about them, I entirely agree. It would be much better if macroeconomic models, for example, incorporated expectation formation based on empirical research into how people in different circumstances form expectations rather than assuming they know everything they need to know. That would benefit from interdisciplinary work with social psychologists, which happens but too rarely.
    As for atomism, we can reject the philosophical individualism of marginalist theory but any theory has to start somewhere. There will be atoms, just different ones, and they can interact in complicated ways. Many economists have worried for a long time about the right starting point, e.g. Brian Loasby, “Time, knowledge and evolutionary dynamics: Why connections matter”, Journal of Evolutionary Economics, 2001.

    • November 29, 2020 at 7:12 am

      “There is one important difference between ordinary logic and the automata which represent it. Time never occurs in logic, but every network or nervous system has a definite time lag between the input signal and the output response. …it prevents the occurrence of various kinds of vicious circles (related to ‘non-constructivity’, ‘impredicability’ and the like) which represent a major class of dangers in modern logical systems”. [John von Neumann, 1987, p.554; note 44 in Mirowski’s “Machine Dreams”, p.141].

  2. Ikonoclast
    November 29, 2020 at 1:04 am

    I was so tempted to post a link to Supertramp’s “Logical Song”. It is a valid lament. Being taught to be logical represents a loss of innocence when we forget simpler human feelings and values. Interesting to note that in the lyrics the writer warns that refusing to become “logical” in the approved fashion will get you branded as “radical, a liberal… fanatical, criminal”.

  3. Meta Capitalism
    November 29, 2020 at 3:59 pm

    We are always led astray when we want to introduce the measure of our world into the judgment of animal worlds. But I could argue that all of Nature takes part as a motif in the development of my personality, concerning my body as well as my mind. If that were not the case, I would lack the organs with which to know Nature…. Then, I am not a product of all of Nature but only the product of human nature, beyond which no knowledge is afforded me. Just as the tick is only a product of tick nature, the human being remains bound to its human nature, from which each individual always emerges anew. (A Foray into the Worlds of Animals and Humans: with A Theory of Meaning (Posthumanities), by Jakob von Uexkull, Dorian Sagan, Joseph D. O’Neil — https://a.co/5854Nmp)

    Our beliefs and expectations about a host of things cannot be treated in that way [as probability distributions] but if you are asked about a specific price the day after tomorrow, you can express your [subjective?] expectation in terms of a probability distribution [this elides the fact that it is calling a subjective judgement, little more than a rule-of-thumb guess based on subjective belief, a “probability distribution,” a term which in _real_ science has a precise meaning], albeit one that is the outcome of processes more complex than you can specify [in other words it is NOT a probability distribution but a guess or belief about X, albeit an educated one, but a personal judgment nonetheless]. No-one can enumerate all the influences on a share price, for example, but you can buy options that in effect imply probabilities. ~ Gerald

    ~~~~~~~
    The meaning of the term “probability distribution” is defined by probability theory. Within the domain of mathematics it has a precise definition. To elide over the very different meanings, as Kay notes below, raises legitimate questions. If we use the phrase “expectation in terms of a probability distribution” without preceding it with “subjective” (which is what an expectation is), it seems to conflate a belief (hunch, guess, etc.) with a statistical probability distribution based upon real measurable data. To wit:

    When ‘John’, the CIA representative at the White House meeting, said ‘The probability that the man in the compound is bin Laden is 95%,’ he was not saying that on 95% of similar occasions bin Laden would be found there. And when people say of an historic but imperfectly understood event ‘I am 90% certain that the Yucatán asteroid caused the extinction of the dinosaurs’, this is not a claim that on 90% of occasions on which the dinosaurs became extinct the cause of their extinction was an asteroid landing in the Gulf of Mexico, but an expression of their confidence in their opinion. These assertions of confidence or belief – ‘the probability that Dobbin will win is 0.9’, ‘the probability that the man in the compound is bin Laden is 95%’, ‘I am 90% certain that the Yucatán asteroid caused the extinction of the dinosaurs’ – are today described as statements of subjective, or personal, probability. In this book we will use the term subjective probabilities throughout. The adjectives ‘subjective’ or ‘personal’ acknowledge that the assessment is not objective but a matter of individual judgement, and that different people may attach different probabilities to the same past, present or future event, both before and after it has occurred. (Radical Uncertainty: Decision-Making Beyond the Numbers” by John Kay, Mervyn King – https://a.co/adQdgaY )

  4. ghholtham
    November 29, 2020 at 7:31 pm

    Frequency distributions are objective properties of a data set but probabilities reflect human hypotheses about the future so are necessarily subjective. The hypotheses may be very complicated or very simple, e.g. that the future will reproduce the same frequency distribution as the past. If the hypothesis leading to the probability distribution is generally accepted it might appear to be objective but I don’t think it is. It would be more accurate to call it intersubjective – shared by many people. I’m not sure I understand what an objective probability distribution could be. Perhaps it corresponds to the “ontological uncertainty” we have discussed before. I’m afraid I don’t believe in it. The universe is as it is. It is we who don’t know what’s going on.
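    As a toy sketch of that ‘very simple’ hypothesis (all data below are invented): the frequency is an objective property of the sample, and it only becomes a forecast probability once the stationarity hypothesis is added.

    ```python
    # Toy sketch: turning a past frequency into a forecast probability requires the
    # (inter)subjective hypothesis that the future reproduces the past distribution.
    import numpy as np

    rng = np.random.default_rng(0)
    past_returns = rng.normal(0.0, 0.02, size=1000)   # pretend: 1000 observed daily returns

    # Objective property of the data set: the observed frequency of a fall below -2%.
    observed_freq = np.mean(past_returns < -0.02)
    print("observed frequency of a fall below -2%:", observed_freq)

    # Hypothesis-dependent step: "tomorrow is a fresh draw from the same distribution".
    # Only under that assumption does the frequency become a probability forecast.
    forecast_prob = observed_freq
    print("forecast P(tomorrow's return < -2%):", forecast_prob)
    ```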

    • November 29, 2020 at 9:04 pm

      If you would only take the trouble to get your mind round Shannon’s Mathematical Theory of Communication you may find that not knowing the future is not the only option in the real world. We can recognise errors as they happen (as the present becomes the past) and with a bit of ingenuity correct the error before it has time to take effect.

  5. Meta Capitalism
    November 30, 2020 at 12:07 am

    Frequency distributions are objective properties of a data set but probabilities reflect human hypotheses about the future so are necessarily subjective. ~ Gerald
    .
    The underlying stationarity of the weather, a simple game of chance, and insurance tables are in no way equal to or the same as the “subjective preferences” of people and their “animal spirits” and free-will choices under uncertainty. Therein, it seems to me, you part ways with fundamental scientific knowledge and seemingly impose personal opinions that fly in the face of science.

  6. Meta Capitalism
    November 30, 2020 at 12:21 am

    It seems in Gerald’s world there is no place for free will and human choice, which are very different from the behavior of material realities on various levels. Kay et al. go to great pains to describe the difference, and it seems what you are saying, Gerald, is that you simply don’t see the difference and don’t believe in it. I think this is really just you expressing an unspoken mechanistic materialism as your fundamental philosophical view of the world and the humans who inhabit it. In such a world everything is deterministic. You have expressed the belief (hope?) that with enough _new science_ we can turn everything into a solvable puzzle. This is not science but a philosophical viewpoint. Hence the endless, never-ending, fruitless round-and-round, because the real _issue_, one’s fundamental philosophical presuppositions, is never addressed openly as the reason for certain positions/viewpoints.

    Microeconomics, Macroeconomics and Complexity

    Since 1976, Robert Lucas – he of the confidence that the ‘problem of depression prevention has been solved’ – has dominated the development of mainstream macroeconomics with the proposition that good macroeconomic theory could only be developed from microeconomic foundations. Arguing that ‘the structure of an econometric model consists of optimal decision rules of economic agents’ (1976, p. 13), Lucas insisted that, to be valid, a macroeconomic model had to be derived from the microeconomic theory of the behaviour of utility-maximising consumers and profit-maximising firms. (Keen, Steve. Can We Avoid Another Financial Crisis? (The Future of Capitalism) (p. 25). Wiley. Kindle Edition.)

    In fact, Lucas’s methodological precept – that macro-level phenomena can and in fact must be derived from micro-level foundations – had been invalidated before he stated it. As long ago as 1953 (Gorman, 1953), mathematical economists posed the question of whether what microeconomic theory predicted about the behaviour of an isolated consumer applied at the level of the market. They concluded, reluctantly, that it did not:

    market demand functions need not satisfy in any way the classical restrictions which characterize consumer demand functions . . . The importance of the above results is clear: strong restrictions are needed in order to justify the hypothesis that a market demand function has the characteristics of a consumer demand function. Only in special cases can an economy be expected to act as an ‘idealized consumer’. The utility hypothesis tells us nothing about market demand unless it is augmented by additional requirements. (Shafer & Sonnenschein, 1993, pp. 671–2) (Keen, Steve. Can We Avoid Another Financial Crisis? (The Future of Capitalism) (pp. 25-26). Wiley. Kindle Edition.)

    (….) The textbooks from which mainstream economists learn their craft shielded students from the absurdity of these responses, and thus set them up to unconsciously make inane rationalisations themselves when they later constructed what they believed were microeconomically sound models of macroeconomics, based on the fiction of ‘a representative consumer’. Hal Varian’s advanced mainstream text Microeconomic Analysis (first published in 1978) reassured Master’s and PhD students that this procedure was valid – ‘it is sometimes convenient to think of the aggregate demand as the demand of some “representative consumer” . . . The conditions under which this can be done are rather stringent, but a discussion of this issue is beyond the scope of this book’ (Varian, 1984, p. 268) – and portrayed Gorman’s intuitively ridiculous rationalisation as reasonable:

    Suppose that all individual consumers’ indirect utility functions take the Gorman form . . . [where] . . . the marginal propensity to consume good j is independent of the level of income of any consumer and also constant across consumers . . . This demand function can in fact be generated by a representative consumer. (Varian, 1992, pp. 153–4, emphasis added. Curiously the innocuous word ‘generated’ in this edition replaced the more loaded word ‘rationalized’ in the 1984 edition.) It’s then little wonder that, decades later, macro-economic models, painstakingly derived from microeconomic foundations – in the false belief that it was legitimate to scale the individual up to the level of society, and thus to ignore the distribution of income – failed to foresee the biggest economic event since the Great Depression. (Keen, Steve. Can We Avoid Another Financial Crisis? (The Future of Capitalism) (pp. 30-31). Wiley. Kindle Edition.)
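    A toy numerical illustration of the aggregation point in the passage above, using two invented Cobb-Douglas consumers (it sketches only the simplest case, not Gorman’s general result): the same prices and the same total income produce different market demand depending on who holds the income.

    ```python
    # Toy sketch of the aggregation problem: two Cobb-Douglas consumers with
    # different budget shares for good 1. All numbers are invented.

    def demand_good1(share, income, price):
        # A Cobb-Douglas consumer spends a fixed share of income on good 1.
        return share * income / price

    price = 1.0
    total_income = 100.0

    for income_a in (20.0, 50.0, 80.0):        # redistribute the same total income
        income_b = total_income - income_a
        market = demand_good1(0.8, income_a, price) + demand_good1(0.2, income_b, price)
        print(f"income split {income_a:.0f}/{income_b:.0f}: market demand for good 1 = {market:.1f}")

    # Market demand shifts with the income distribution even though prices and total
    # income are unchanged, so it cannot be generated by one utility-maximising
    # 'representative consumer'. Only when the budget shares coincide (the Gorman
    # condition in this toy case) does aggregation go through.
    ```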

    So macroeconomics cannot be derived from microeconomics. But this does not mean that ‘The pursuit of a widely accepted analytical macro-economic core, in which to locate discussions and extensions, may be a pipe dream’, as Blanchard put it. There is a way to derive macroeconomic models by starting from foundations that all economists must agree upon. But to actually do this, economists have to embrace a concept that to date the mainstream has avoided: complexity. (Keen, Steve. Can We Avoid Another Financial Crisis? (The Future of Capitalism) (p. 31). Wiley. Kindle Edition.)

    The discovery that higher-order phenomena cannot be directly extrapolated from lower-order systems is a commonplace conclusion in genuine sciences today: it’s known as the ‘emergence’ issue in complex systems (Nicolis and Prigogine, 1971; Ramos-Martin, 2003). The dominant characteristics of a complex system come from the interactions between its entities, rather than from the properties of a single entity considered in isolation. (Keen, Steve. Can We Avoid Another Financial Crisis? (The Future of Capitalism) (pp. 31-32). Wiley. Kindle Edition.)

    My favourite instance of this is the behaviour of water. If one had to derive macroscopic behaviour from microscopic principles, then weather forecasters would have to derive the myriad properties of the weather from the characteristics of a single molecule of H2O. This would entail showing how, under appropriate conditions, a ‘water molecule’ could become an ‘ice molecule’, a ‘steam molecule’, or – my personal favourite – a ‘snowflake molecule’. In fact, the wonderful properties of water occur, not because of the properties of individual H2O molecules themselves, but because of interactions between lots of (identical) H2O molecules. (Keen, Steve. Can We Avoid Another Financial Crisis? (The Future of Capitalism) (p. 32). Wiley. Kindle Edition.)

    The fallacy in the belief that higher-level phenomena (like macroeconomics) have to be, or even could be, derived from lower-level phenomena (like microeconomics) was pointed out clearly in 1972 – again, before Lucas wrote – by the Physics Nobel Laureate Philip Anderson: (Keen, Steve. Can We Avoid Another Financial Crisis? (The Future of Capitalism) (p. 32). Wiley. Kindle Edition.)

    The main fallacy in this kind of thinking is that the reductionist hypothesis does not by any means imply a ‘constructionist’ one: The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe. (Anderson, 1972, p. 393, emphasis added) (Keen, Steve. Can We Avoid Another Financial Crisis? (The Future of Capitalism) (pp. 32-33). Wiley. Kindle Edition.)

    Anderson specifically rejected the approach of extrapolating from the ‘micro’ to the ‘macro’ within physics. If this rejection applies to the behaviour of fundamental particles, how much more so does it apply to the behaviour of people?

    The behavior of large and complex aggregates of elementary particles, it turns out, is not to be understood in terms of a simple extrapolation of the properties of a few particles. Instead, at each level of complexity entirely new properties appear, and the understanding of the new behaviors requires research which I think is as fundamental in its nature as any other. (Anderson, 1972, p. 393) (Keen, Steve. Can We Avoid Another Financial Crisis? (The Future of Capitalism) (p. 33). Wiley. Kindle Edition.)

    (Keen, Steve. Developing an economics for the post-crisis world. UK: College Publications for World Economics Association; 2016; p. 1. (Developing an economics for the post-crisis world; v. 3).)

    .
    When Lars speaks of “‘Human logic’” I am reminded of the real meaning of Uexkull’s musings in biology: that the total “environment” of each and every organism is an experiential world in its own wholeness. It is the failure to account for the fullness of human ability that leads to this eliding of the difference between the stationarity of physical-material systems and thinking, feeling, biological systems, and, more importantly, human beings and the society they live in and the kinds of patterns they create. We are not particles but thinking, feeling beings who use emotions in decision-making just as much as we use reason. I think this is a perennial debate that has no resolution as these are really just two opposing philosophical views of reality.

    • November 30, 2020 at 8:50 am

      Good one, Meta.

    • December 1, 2020 at 6:01 pm

      Well and beautifully said. Even if it were the case that economists had managed to derive individual ‘demand or offer curves’ for individual ‘consumers’ — which they have NOT managed to do and which, frankly, cannot be done when relative prices between ‘goods’ change for many different reasons (among which is the reality that ‘consumers’ can and do reformulate their budgets when prices change) — aggregate populations of ‘consumers’ behave differently than individuals. A single molecule simply cannot exhibit Brownian motion: nor boil, nor freeze, nor vaporize.

  7. ghholtham
    November 30, 2020 at 3:20 pm

    Free will is a red herring in this context. Does free will imply that your decisions and choices have nothing to do with the circumstances of your upbringing, your life experiences and your genetic endowment? Presumably not. These things are what make your will what it is. If it is unconstrained by anything outside yourself we call it free but that doesn’t imply it is acausal. If your will is so free it doesn’t depend on your nature in any way, in what sense can it be said to be yours at all? In any case it is an observable fact that the behaviour of human groups has some predictability in some circumstances. If we have a theory about a particular behaviour pattern we can sometimes express its forecasts in terms of conditional probabilities. Note conditional, because no theory can take account of everything and something outside the theory might blow the whole thing off course. The probabilities are, needless to say, entirely dependent on the theory. If it is wrong so are they. Such probabilities are a human artefact – intersubjective.
    The fact that uncertainty is the result of our lack of knowledge does not mean we can ever know everything. I have never said that. A universe complicated enough to evolve the human brain may well be too complicated for the human brain to understand. Some parts of the universe are so remote that light has not had enough time to reach us since the big bang. Evidently we cannot know anything about that remote part of the universe. That does not imply everything is uncertain there; it just means we have to be totally uncertain about it. The oddity to me is that people want to project their own uncertainty on to the external world. Uncertainty is a characteristic of human consciousness. The weather isn’t uncertain to itself. We are uncertain about it.
    This is fun but I am not sure what we are arguing about. People, not just economists, do apply probabilities to uncertain situations. Sometimes these are founded on a solid theory supported by a lot of stable evidence. Sometimes, as you say, they are guesses. In each case it is up to us to decide how much credence to give them.
    It is pointless to quote Lucas at me. I have never met him but I detest what he stands for in economics, methodologically and ideologically. I think there is sometimes a tendency for people in a debate to look for pigeon-holes to put their counterparties in. I can’t stop you doing it but don’t put me in that one!

    • Meta Capitalism
      November 30, 2020 at 11:25 pm

      Uncertainty is a characteristic of human consciousness. ~ Gerald
      .
      The basic indeterminacy in the atomic realm leads to a fundamental revision in the concept of Nature: Nature is probabilistic, not just our knowledge of Nature. (Weinert 2004: 59)

      With the indeterminacy relations, quantum mechanics imposes a limit on simultaneous precision of the position and momentum, the energy and time of quantum systems. (Weinert 2004: 63)

      Indeterminacy means that there is no reference frame from which a more precise determination of the location and momentum, the energy and time can be achieved simultaneously. (Weinert 2004: 79)

      Bohr agreed with Heisenberg that the indeterminacy relations were responsible for the collapse of the classical ideal of causation in the quantum world. As usual, the classical ideal of causation was seen as the complete determination of a physical system in terms of its well-defined properties. (Weinert 2004: 233)
      .
      I would like to put the uncertainty principle in its historical place: When the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of old-fashioned ideas (such as light goes in straight lines). But at a certain point the old-fashioned ideas would begin to fail, so a warning was developed that said, in effect, “Your old-fashioned ideas are no damn good when ….” (Feynman 1984: 55-56, QED: The Strange Theory of Light and Matter.)

      The penetration of probability theory and randomness into the bastion of determinism is a fascinating story in its own right (Brush 1976, 1983; Porter 1981a; Prigogine 1980; Forman 1971). Initially, indeterminacy was introduced apologetically, not as an attribute of nature per se, but rather as an artifact of the level of our ignorance of the exact positions and velocities of all of the constituent components of a fundamentally deterministic phenomenon (Jevons 1905a, p. 198). However, once Pandora’s box was opened, there seemed to be no stopping the spread of probabilistic concepts in physics. Probabilistic gas theory led to statistical mechanics, which begat quantum mechanics, which begat probability waves, which begat nonergodic and weakly stable systems, all of which begat … a seemingly accidental universe. Once randomness was introduced into the theory of the external world, it spread like wildfire, or cancer, or crystallization of a supercooled liquid, the choice of metaphor prudently contingent upon the attitude of the audience toward its seemingly inexorable advance. (Mirowski 1989, 63-64)
      .
      3.1 Understanding and Fundamental Concepts
      .
      Concepts like understanding and meaning are usually associated with a particular view of the Social Sciences. Social life produces and reproduces symbolic meaning. Social scientists need to acquire an understanding of the inherent symbolic meaning in social life. They do this, it is said, by adopting the viewpoint of a passive participant observer. In this view, the role of the social scientist is seen as distinctly different from that of the natural scientist. The object of study of the social scientist is society, the network of social interactions. Society does not exist outside the bracket of social interactions. The social sciences deal with the pre-interpreted world of the social participants. The social scientist interprets a social world, which already carries symbolic meaning. The symbolic meaning of the social world is produced and reproduced by the social actors. The study of the social world by social scientists is a matter of human subjects studying other human subjects. It is a matter of symbolic dimensions meeting other symbolic dimensions, a subject-subject relation. (Weinert 2004, 75)
      .
      The object of study of the natural scientist is Nature, the organic or inorganic material world. In this objective sense Nature is not a human product. But, in a symbolic sense, ‘Nature’ is a creation of human understanding. In their interaction with the material world, humans conceptualise Nature in an attempt to understand its functioning. Models, theories and laws are the result. They reflect in symbolic form what successive epochs understood by ‘Nature’. The concept of ‘Nature’ belongs to the category of fundamental notions with which humans represent the natural world. Humans also use fundamental concepts to explain how they, as humans, manage to comprehend the world around them. Do [sic] to this, humans employ such fundamental concepts as determinism, indeterminism and causation, time and space, mass and energy, motion and rest. (Weinert 2004: 75-76)
      .
      (….) Natural scientists face a pre-given natural world, not the symbolic, pre-interpreted world of the social scientist. Natural scientists stand in a subject-object relation to their object of study. Yet they use symbolic language to make sense of the material world. Questions of understanding and meaning have been familiar to the scientific enterprise throughout history. Like the ancient Greeks, natural scientists face a complex of often bewildering phenomena. There is first the question of how the observable phenomena behave. The observational and experimental data reveal patterns of regularity. Then there is the intriguing question why the phenomena behave in such particular patterns of regularity. In an attempt to answer such questions, the natural scientist aims at understanding and explanation. (Weinert 2004: 76-77)

      .
      I think the real issue is a difference in fundamental concepts the roots of which are in the realm of philosophical viewpoints.
      .

      [W]hat we have to deal with in the study of society and culture, indicates its purely intellectual difficulties, and shows how much easier are physics, chemistry or even biology. Even this, however, is not the whole story: for imagine how sorry would be the plight of the natural scientist if the objects of his inquiry were in a habit of reacting to what he says about them: if the substances could read or hear what the chemist writes or says about them, and were likely to jump out of their containers and burn him if they did not like what they saw on the blackboard or in his notebook. And imagine the difficulty of testing the validity of chemical formulae if, by repeating them long enough or persuasively enough, the chemist could induce the substances to behave in accordance with them — with the danger, however, that they might decide to spite him by doing exactly the opposite. Under such circumstances our chemist would not only have a hard time trying to discover firm regularities in his objects’ behaviour but would have to be very guarded in what he said lest the substances take offense and attack him. His task would be even more hopeless if the chemicals could see through his tactics, organize themselves to guard their secrets, and devise counter-measures to his maneouvres — which would be parallel to what the student of human affairs has to face. (Andreski 1973, 20-21, in Social Sciences as Sorcery)
      .
      There is no reason to deny the existence of phenomena known to us only through introspection; and a number of philosophers have pointed out the impossibility of carrying out Carnap’s programme (accepted as a dogma by the behaviourists) of translating all statements about mental states into what he calls the physicalist language. I would go even further and argue that physics itself cannot be expressed in the physicalist language alone because it is an empirical science only insofar as it includes an assertion that its theories are corroborated by the evidence of the senses; and we can assign no meaning to the latter term without entailing a concept of self…. Thus you cannot give an account of the evidential foundations of physics without hearing and uttering ‘I’. And what kind of meaning can you attach to this word without using the knowledge obtained through introspection; and without postulating the existence of other minds within which processes are taking place which are similar to those which you alone can observe? (Andreski 1973, 21-22, in Social Sciences as Sorcery)

      .

      Free will is a red herring in this context. Does free will imply that your decisions and choices have nothing to do with the circumstances of your upbringing, your life experiences and your genetic endowment? Presumably not. ~ Gerald

      .
      Upbringing in a family or culture doesn’t cause us not to have free will, only to hold biases to one degree or another in certain contexts. We are not billiard balls on a billiard table, and Laplace’s deterministic demon has long, long ago been shown to be inadequate to the task of understanding human decision making, let alone quantum mechanics. The point is that human beings, with their reflexive self-awareness and (albeit limited) free will, bring into reality a level of uncertainty not found in physics.

    • Meta Capitalism
      December 1, 2020 at 12:36 am

      A revolution in our scientific understanding of the physical world occurred during the twentieth century. The upheaval revised our idea of science itself, and thrust our conscious thoughts into the dynamical process that determines our physical future. During the preceding two centuries, from the time of Isaac Newton, our conscious experiences had been believed by most scientists to be passive witnesses of a clock-like physical universe consisting primarily of tiny atomic particles and light, that evolves with total disregard of our mental aspects. Our conscious thoughts had, for two hundred years, been exiled from science’s understanding of the workings of nature. But during the first quarter of the twentieth century that earlier “classical” theory was found by scientists to be unable to account either for the observed properties of light, or for the plethora of new empirical data pertaining to the dynamical properties of actual atoms such as Hydrogen and Helium. A better theory was needed! (Stapp 2017, ix)

      In 1925 Werner Heisenberg, the principal creator of quantum mechanics, concluded from an analysis of the data of atomic physics that the basic precepts of the prevailing classical theory were profoundly wrong, and that the root of the difficulties lay in Newton’s ascribing to his conception of atoms certain properties that empirically observed atoms do not possess. The first of these purely fictional ideas is that each atom has, at each instant of time, a tiny well-defined location in 3D space. The second is that the evolving physical properties are completely determined by prior physical properties alone, with no input from our conscious thoughts. These latter “mental” realities were assumed, in the classical theory, to be completely determined by the physically described properties of the associated brains and nervous systems. Hence they do not, in that theory, constitute extra free variables. But in quantum mechanics the evolution of the physically described aspects is not fully determined by physically described properties alone: our conscious experiences enter irreducibly into the dynamics! (Stapp 2017, ix)

      The basic principle that guided Heisenberg to the successful new theory was that it should be based on properties that we can choose to measure. The choices on the part of conscious agents are “free choices” — from amongst the many possibilities allowed by the theory — of which measurement to actually perform. Here the adjective “free” means that the choices are not determined by purely “physical” laws alone. They are determined in part by irreducible mental aspects of psycho-physical observers. The observer’s “free choices” are thus non-physical inputs into the physical dynamics! Our minds are not mere side effects of material physical processes: Our Minds Matter! (Stapp 2017, ix-x) (Stapp, Henry P., Author. Quantum Theory and Free Will [How Mental Intentions Translate Into Bodily Actions]. Cham, Switzerland: Springer International Publishing AG; 2017; p. Prologue.)

      ~ ~ ~

      Free will is a red herring… Uncertainty is a characteristic of human consciousness. ~ Gerald Holtham, RWER, 11/30/2020

      .
      It seems, Gerald, you don’t know the difference between a logical fallacy and the philosophical fundamental notions of science. I think this is where the nub of the different viewpoints lies: a difference in viewpoint on the fundamental concepts of reality. Kay et al. are really just pointing out that in human experience there are things we simply cannot know and that probability distributions are of no use in determining our choices. On the one hand ME creates the caricature of homo economicus while on the other some create the caricature of automata or Turing machines or fitness-climbing ticks. Of course, the attempt to assign the umwelt of a tick to that of a human is a spoof and abuse of biology by epistemic trespassing. In reality humans have their own unique umwelt and it is that which Kay is trying to illuminate.

      In Alan Turing’s test of computer consciousness, a program that persuades us by its behavior that it is self-aware must be considered aware. I thus believe your foreign Umwelt is real because you persuade me as such. The alternative is solipsism. I can imagine, but not directly know, what it’s like to be you. (Jakob von Uexkull. A Foray into the Worlds of Animals and Humans: with A Theory of Meaning (Posthumanities) (Kindle Location 368). Kindle Edition.)

      .
      Intersubjective implies two subjective personal minds engaging in symbolic communication, convincing each other they are real persons communicating something of meaning and value.

  8. November 30, 2020 at 6:16 pm

    Gerry writes: “Uncertainty is a characteristic of human consciousness. The weather isn’t uncertain to itself. We are uncertain about it.” And that I think really nails the fundamental difference between his Knightian uncertainty and my Keynesian uncertainty.

    Keynes famously wrote:
    “By ‘uncertain’ knowledge, let me explain, I do not mean merely to distinguish what is known for certain from what is only probable …. The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention, or the position of private wealth-owners in the social system in 1970. About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know.” (Keynes, CW 14, p. 213)

    What Keynes here stresses is basically that for some (many) phenomena we just don’t have — and IPSO FACTO don’t know — any probability distributions. The reason we don’t have probability distributions for these phenomena is ontological. For these phenomena, the ontological basis explains (is behind) our epistemological uncertainty (we can’t know about things that aren’t there). When it comes to these phenomena “We simply do not know.”

  9. Gerald Holtham
    December 1, 2020 at 1:05 pm

    This comment fails to register for some reason. Trying again… I think there is some genuine disagreement here but I am not sure there is much. There is certainly some misunderstanding.
    I am not saying that it is possible to apply well-founded probability distributions to every uncertain situation. Complete uncertainty exists; of course it does. We can all agree about that. I think we can also agree that there are situations where on the basis of some theory it is possible to assign conditional probabilities to outcomes – and this is routinely done in practice by insurance companies for example. A disagreement about when it makes sense to do so usually comes down to a dispute about the usefulness of the particular theory someone is using to assign probabilities. Those probabilities are artefacts of the theory; they are “real” only in the sense that the theory is useful.
    Where there is complete uncertainty, I go back to my threefold classification: things where we know nothing but might know more one day; things where we can never know because of the structure of reality and its information flows; things that are uncertain because nature is random. The last two categories are the ones that seem to preoccupy this discussion. I am not saying category two – things we cannot know even if they are determinate – does not exist; of course it does. But it is an epistemological fact; we shall remain uncertain even if the universe is completely determinate. I think it is what Lars terms Knightian uncertainty and there is no dispute about it. For some reason that I do not understand people want to insist on the third sort – randomness and indeterminacy as part of nature. Is that what Lars means by Keynesian uncertainty? I am not sure but I make three points about that.
    It is in practice impossible to tell category two and three apart. If we cannot know something we cannot tell whether it is random or not. What practical implications does the distinction have? It is fun to speculate why we cannot know but ultimately what does the question affect?
    Secondly, people give enormous metaphysical significance to quantum indeterminacy but no-one knows what it “means”. People have speculated it is determinacy in a multiverse. The salient point is that it disappears when we consider the number of elementary particles involved in any naturally-occurring phenomenon. If you want to be pedantic it means gasses are only probably going to obey Boyle’s law. But odds of 10 to the 87 or whatever the number is mean it’s a sure thing, to all intents and purposes completely determinate.
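    A back-of-the-envelope version of that order-of-magnitude claim, assuming an ideal gas of N roughly independent molecules, for which relative fluctuations of a macroscopic quantity such as pressure scale as 1/sqrt(N):

    ```latex
    % sketch: relative pressure fluctuations for N roughly independent molecules
    \[
      \frac{\Delta P}{P} \sim \frac{1}{\sqrt{N}}, \qquad
      N \approx 6 \times 10^{23} \;\Longrightarrow\; \frac{\Delta P}{P} \approx 10^{-12}
    \]
    ```

    So statistical deviations from Boyle’s law are unobservably small at laboratory scales, whichever way the in-principle question about indeterminacy is settled.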
    If I prefer to consider the universe determinate no-one can prove me wrong any more than I can prove wrong someone who wants to believe the opposite. Why does it matter for economics? Economic propositions are always probabilistic, conditional and limited in time and space. There are no timeless laws. We all agree about that.
    Finally (sorry to go on but I am trying to tidy up misunderstandings) people who complain about economic theory ignoring the full richness of human nature and culture have a point but only up to a point. Let me enter the quote business. Here are two which encapsulate my position and explain why I think economic theory is possible. They are from Herb Simon’s The Sciences of the Artificial
    “… the possibility of building a mathematical theory of a system or of simulating that system does not depend on having an adequate microtheory of the natural laws that govern the system components. Such a microtheory might indeed be simply irrelevant.”
    “This skyhook-skyscraper construction of science from the roof down to the yet unconstructed foundations was possible because the behaviour of the system at each level depended on only a very approximate, simplified, abstracted characterization of the system at the level next beneath. This is lucky, else the safety of bridges and airplanes might depend on the correctness of the “eightfold way” of looking at elementary particles.”
    In other words reality is like Russian dolls. You can study one doll usefully without knowing all about the next one inside. Just as well, otherwise all cooks would have to study biochemistry.

    • December 1, 2020 at 1:38 pm

      I answered this above on November 29th at 9.04 pm, summarising previous comments which mention second-order cybernetics. Here you are misunderstanding Keynes’s argument. In terms of information, Knightian uncertainty assumes the information exists but we are unable to decode it. Keynes is saying the evidence of the future doesn’t yet exist. Shannon is saying an extreme version of your first category: you can catch information just as it starts to exist, and if necessary correct your aim before your expectation being wrong can do any damage. That’s what doesn’t happen when politicians are assuming their policies are right and trying to make them happen.

    • Meta Capitalism
      December 1, 2020 at 1:50 pm

      Thanks for that thoughtful comment, Gerald. Keen is making a similar point: that theory need not be based on micro-theory. He speaks of emergent levels of reality etc. I am not against theory and never have been. You would really enjoy Weinert’s book. It says some important things about quantum mechanics that few are aware of, particularly regarding causality and determinism and indeterminism. It also has interesting things to say about models.

    • December 1, 2020 at 4:59 pm

      Thanks for your long and interesting comment, Gerry. I had a similar discussion here on the rwer blog already back in 2014 with Paul Davidson, and maybe that could be of interest to read:

      Keynes and Knight on uncertainty — what’s the difference?

  10. Craig
    December 1, 2020 at 5:30 pm

    The entire issue of uncertainty is basically moot. Change/emergence is an aspect of the temporal universe so….okay.

    The real issue is whether progress is palliative reform or paradigm changing. But even a paradigm change eventually becomes orthodoxy and problematic.

    Now a mega-paradigm/zeitgeist change, the initiator of which Direct and Reciprocal Monetary Gifting is, THAT is really broad and stable progress. There’s only been two, maybe three of those in the entire history of the human species.

    Intellectuals, “Ye must become like little children.” That is, let loose of your burdensome and limiting orthodoxies, even your iconoclasm, and consider the integrative thirdness that emerges out of dualing half truths and falsehoods when a new tool or insight is discovered.

    Please.

  11. Gerald Holtham
    December 1, 2020 at 10:12 pm

    Lars,
    Thanks for the reference. I agree with your statement: “BUT – from a “practical” point of view I have to agree with Taleb. Because if there is no reliable information on the future, whether you talk of epistemological or ontological uncertainty, you can’t calculate probabilities.”

    You then say: The most interesting and far-reaching difference between the epistemological and the ontological view is that if you subscribe to the former, knightian view – as Taleb and “black swan” theorists basically do – you open up for the mistaken belief that with better information and greater computer-power we somehow should always be able to calculate probabilities and describe the world as an ergodic universe. As both you and Keynes convincingly have argued, that is ontologically just not possible.

    My only quibble is that the word ‘always’ is wrong. I doubt if Taleb believes that we somehow should always be able to calculate probabilities and I certainly don’t. I will confess to thinking that we should “sometimes” be able to… Reading the future comprehensively will always be beyond us but a few people can make a living betting on horses. Some things will always be in my second category. However, just as we cannot in practice distinguish category two from three, we cannot be sure of the boundary between one and two. Democritus hypothesized the existence of atoms. I am sure he never conceived we should ever be able to see them and describe their movements. But after two and a half thousand years of technical advance we can.
    I am not convinced, either, that ergodic and knowable are exactly equivalent. We can know things about non-ergodic systems. But that is another discussion.
    Thanks anyway. I am clearer where we all stand.

    • December 2, 2020 at 12:37 pm

      “Free will is a red herring… Uncertainty is a characteristic of human consciousness”. ~ Gerald Holtham, RWER, 11/30/2020

      “[Lars’s] comment fails to register for some reason. Trying again… I think there is some genuine disagreement here but I am not sure there is much. There is certainly some misunderstanding”.

      ” For some reason that I do not understand people want to insist on the third sort – randomness and indeterminacy as part of nature. Is that what Lars means by Keynesian uncertainty? … It is in practice impossible to tell category two and three apart. If we cannot know something we cannot tell whether it is random or not. What practical implications does the distinction have?”

      Gerald cannot know whether free will is a red herring when he has not bothered to read Shannon’s masterpiece and so grasp the easily recognised difference between what is expected (given what we are trying to do) and what is probable. Nor has he read (or in any case responded to) what I wrote earlier about Shannon and his own first category. Of course he does not understand what he doesn’t know about.

      The practical implication of Shannon for the programmer or politician trying to define the expected is that, he needs to understand, he has a choice between his program allowing for the unexpected; ignoring it; repeating the action to see what had gone wrong; or correcting the data so the program can plough on regardless (or in degraded mode). The tragedy is that guys who don’t know this, but have learned from elites how to deflect criticism by ignoring or misrepresenting it, are given more credence than those who have more idea of what they are talking about.
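      A minimal sketch in code of those four choices (the failing read_sensor function and the fallback value are invented for illustration):

      ```python
      # Sketch: four ways a program can meet 'the unexpected', as listed above.

      def read_sensor():
          raise IOError("unexpected input")      # stand-in for the unexpected happening

      def allow_for_it():
          try:
              return read_sensor()
          except IOError as err:                 # 1. allow for the unexpected explicitly
              print("handled:", err)
              return None

      def ignore_it():
          try:
              return read_sensor()
          except IOError:                        # 2. ignore it and carry on
              return None

      def retry_it(attempts=3):
          for i in range(attempts):              # 3. repeat the action to see what went wrong
              try:
                  return read_sensor()
              except IOError as err:
                  print(f"attempt {i + 1} failed: {err}")
          return None

      def degraded_mode(default=0.0):
          try:
              return read_sensor()
          except IOError:                        # 4. correct/substitute the data and
              return default                     #    plough on in degraded mode

      for strategy in (allow_for_it, ignore_it, retry_it, degraded_mode):
          print(strategy.__name__, "->", strategy())
      ```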

      Logic and truth. Let me reiterate that what Keynes, von Neumann and Shannon all recognised and most academics still don’t is that ontological (real-life) logic is temporal (time-ordered) and interactive, whereas epistemic (representational) logic is not. What very few people seem to realise is that it is “word logic” which is capable of being true – but only if its rules are followed. If a word logic proposition is true then subsets or instances of it are also true provided the meaning of the words used hasn’t been changed during the course of the argument. What in real life isn’t true is that what is true of one is true of all. Even economists who ought to (and perhaps do) know better continue to deceive us and perhaps themselves with the fallacy of aggregation. The logical foundations of Micro are Macro, not vice versa.
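
      To make the contrast concrete, a small worked example (mine, purely illustrative):

      ```latex
      % Valid in word logic, provided the meaning of P is held fixed throughout the argument:
      \[
        \forall x\, P(x) \;\vdash\; P(a)
        \qquad \text{(what is true of all is true of each instance)}
      \]
      % Not valid as a rule about the open, interactive world (the fallacy of aggregation):
      \[
        P(a_1) \wedge P(a_2) \wedge \dots \wedge P(a_n) \;\not\vdash\; P(\text{the whole})
      \]
      % e.g. any one household can raise its saving, yet if all try at once aggregate
      % saving and income may fall (the paradox of thrift), because the parts interact
      % over time rather than standing in fixed, separable relations.
      ```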

      I want to agree with Gerald (and Lars, and Craig, and Larry and Meta and Ikonoclast, who still appear to be looking) so we can move on, but I don’t think Gerald should be allowed to have the last word on this while he’s so eloquently obfuscating.

      • Craig
        December 2, 2020 at 7:04 pm

        Dave,

        What you and others say is all perfectly correct. The only thing it leaves out is the possibility of (and historical evidence for) a dynamic, integrative thirdness/greater oneness that both underlies and is the natural impetus for evolutionary change.

        In essence everyone here is talking about, not of, the wisdom/paradigmatic level of inquiry and analysis, which, being the ultimate integrative phenomenon (a single concept that transforms an entire complexity/pattern, i.e. the integration of the mental and the temporal), is precisely what everyone here would like to see happen in economic theory and the economy proper.

        I also recognize that this is a total conversation-stopping viewpoint for the chattering classes, whose habituation is to problems (point-counterpoint, obsessive dualism), but sometimes one needs to “become like little children” and come into present time about these matters … so we can all get on with the business of better survival.

        Sorry.

      • December 3, 2020 at 10:14 am

        Craig, I don’t know what you are sorry about, unless it is my having become disappointed by the “total conversation-stopping viewpoint of the chattering classes” here. I could have used baby talk and spoken of the “dynamic integrative thirdness greater oneness that both underlies and is the natural impetus for evolutionary change” as God, or more explicitly the trinitarian God; but that would have been taken as a conversation-stopper by the self-worshippers that I have been trying not to talk down to. Don’t lump me with the others about being “perfectly correct”: we differ at the philosophical level. I’m prepared to accept the possibility that God exists and the more-or-less historical evidence that confirms this; they have accepted that he can’t and rejected the history as fiction. Trying to stay in the conversation, perhaps I am “talking about, not of, the wisdom/paradigmatic level of inquiry and analysis … [which] is precisely what everyone here would like to see happen in economic theory and the economy proper”. What I am thinking of, though, is the grace of God the Father expressing his lonely love by his energy evolving into us, his children, the concept transforming this energy being first our absence and then today’s story: the prodigal son’s absence as his lack of due gratitude fails to reflect the love of his Father.

        So I’ve failed to deliver the ultimate challenge. It’s me that needs to be sorry. Thanks for the warning, Craig.

      • Craig
        December 3, 2020 at 6:09 pm

        My use of ‘sorry’ was, number one, lightly sarcastic, and was aimed at the apparently scientistic and/or otherwise non-comprehenders of the importance of paradigmatic analysis here, not at you, who are at least open to wisdom. I couldn’t care less whether one attributes wisdom/paradigmatic analysis to God or simply to the most underlying nature of the cosmos … only that they try to think through the implications and recognize how utilizing such a mindset has historically been the route to increased knowledge, REAL progress and better survival.

        Of course I’m “Johnny one note”, but at least I’m Johnny one note about what the major heterodox reformers are all in agreement about, that is, that the problem revolves around money, debt and banks. I’m just trying to get the otherwise intelligent posters here to focus on THE PARADIGM OF MONEY, DEBT AND BANKS … instead of yammering on endlessly about lesser and/or ALREADY AGREED UPON assumption problems in economic theory.

        To all of you who do not reply to my posts: a troll can be both a bother to the analyst … and a prophet, you know.

      • Craig
        December 3, 2020 at 8:18 pm

        Indeed, as Steve Keen says in a recent thread, economists and economic pundits have no ears.

  12. December 2, 2020 at 3:12 pm

    David Attenborough says “curb excess capitalism” to save nature:

    https://www.bbc.co.uk/news/science-environment-54268038

    “Sir David said when we help the natural world, it becomes a better place for everyone and in the past, when we lived closer to nature, the planet was a ‘working eco-system in which everybody had a share’.”

    The Secretary-General of the United Nations agrees with him:

    https://www.bbc.co.uk/news/science-environment-55147647

    Here’s what I suspect is relevant to what Gerald Holtham has been saying:

    https://www.bbc.co.uk/news/stories-53640382

    “Marty Hoffert was one of the first scientists to create a model which predicted the effects of man-made climate change. … Hoffert shared his predictions with his managers, showing them what might happen if we continued burning fossil fuels in our cars, trucks and planes. But he noticed a clash between Exxon’s own findings, and public statements made by company bosses, such as the then chief executive Lee Raymond, who said that “currently, the scientific evidence is inconclusive as to whether human activities are having a significant effect on the global climate”.

    “They were saying things that were contradicting their own world-class research groups,” said Hoffert. … “What they did was immoral. They spread doubt about the dangers of climate change when their own researchers were confirming how serious a threat it was.”

    The article cites parallels from the tobacco industry. “We asked Hill and Knowlton about its work for the tobacco companies, but it did not respond”. In a statement, ExxonMobil told the BBC that “allegations about the company’s climate research are inaccurate and deliberately misleading”. … But academics like David Michaels fear the use of uncertainty in the past to confuse the public and undermine science has contributed to a dangerous erosion of trust in facts and experts across the globe today, far beyond climate science or the dangers of tobacco. He cites public attitudes to modern issues like the safety of 5G, vaccinations – and coronavirus. [This ignores, of course, dangers like physically safe 5G continuing the dumbing down of our future generations].

    “By cynically manipulating and distorting scientific evidence, the manufacturers of doubt have seeded in much of the public a cynicism about science, making it far more difficult to convince people that science provides useful – in some cases, vitally important – information”.

    That seems to be what has happened on this site. Differences are ignored, not explored.
