
On the use of logic and mathematics in economics

from Lars Syll

Logic, n. The art of thinking and reasoning in strict accordance with the limitations and incapacities of the human misunderstanding. The basic of logic is the syllogism, consisting of a major and a minor premise and a conclusion – thus:

Major Premise: Sixty men can do a piece of work sixty times as quickly as one man.

Minor Premise: One man can dig a post-hole in sixty seconds; Therefore-
Conclusion: Sixty men can dig a post-hole in one second.

This may be called syllogism arithmetical, in which, by combining logic and mathematics, we obtain a double certainty and are twice blessed.

Ambrose Bierce, The Unabridged Devil’s Dictionary

In mainstream economics, both logic and mathematics are used extensively. And most mainstream economists sure look upon themselves as “twice blessed.”

Is there any scientific ground for that blessedness? None whatsoever!

If scientific progress in economics lies in our ability to tell ‘better and better stories,’ one would, of course, expect economics journals to be filled with articles supporting the stories with empirical evidence confirming the predictions. However, the journals still show a striking and embarrassing paucity of empirical studies that (try to) substantiate these predictive claims. Equally amazing is how little is said about the relationship between models and their real-world target systems. It is as though explicit discussion, argumentation and justification on the subject aren’t considered to be required.

In mathematics, the deductive-axiomatic method has worked just fine. But science is not mathematics, and conflating those two domains of knowledge has been one of the most fundamental mistakes of modern economics. Applied to real-world open systems, the deductive-axiomatic method immediately proves excessively narrow and hopelessly irrelevant. Both the confirmatory and the explanatory variants of hypothetico-deductive reasoning fail, since there is no way to relevantly analyse confirmation or explanation as a purely logical relation between hypothesis and evidence, or between law-like rules and explananda. In science, we argue and try to substantiate our beliefs and hypotheses with reliable evidence. Propositional and predicate deductive logic, on the other hand, is not about reliability, but about the validity of the conclusions given that the premises are true.

Deduction — and the inferences that go with it — is an example of ‘explicative reasoning,’ where the conclusions we make are already included in the premises. Deductive inferences are purely analytical and it is this truth-preserving nature of deduction that makes it different from all other kinds of reasoning. But it is also its limitation since truth in the deductive context does not refer to a real-world ontology (only relating propositions as true or false within a formal-logic system) and as an argument scheme is totally non-ampliative — the output of the analysis is nothing else than the input.

If the ultimate criterion of success of a model is to what extent it predicts and coheres with (parts of) reality, modern mainstream economics seems to be a hopeless misallocation of scientific resources. To focus scientific endeavours on proving things in mathematical models, is a gross misapprehension of what an economic theory ought to be about. Deductivist models and methods disconnected from reality are not relevant to predict, explain or understand real-world economies.

In science we standardly use a logically non-valid inference — the fallacy of affirming the consequent — of the following form:

(1) p => q
(2) q
————
p

or, in instantiated form

(1) ∀x (Gx => Px)
(2) Pa
————
Ga

Although logically invalid, it is nonetheless (as Charles S. Peirce argued more than a century ago) a kind of inference — abduction — that may be factually strongly warranted and truth-producing.
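That the scheme is logically invalid is easy to verify mechanically. The following small Python sketch (an illustration added here, not part of the original argument) enumerates every truth-value assignment and shows that modus ponens has no countermodel while affirming the consequent does:

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    """Material implication: a => b is false only when a is true and b is false."""
    return (not a) or b

def is_valid(premises, conclusion) -> bool:
    """An argument form is valid iff no assignment makes all premises
    true and the conclusion false."""
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False  # found a countermodel
    return True

# Modus ponens: from p => q and p, infer q  (valid)
modus_ponens = is_valid([lambda p, q: implies(p, q), lambda p, q: p],
                        lambda p, q: q)

# Affirming the consequent: from p => q and q, infer p  (invalid)
affirming = is_valid([lambda p, q: implies(p, q), lambda p, q: q],
                     lambda p, q: p)

print(modus_ponens)  # True: no countermodel exists
print(affirming)     # False: p = False, q = True is a countermodel
```

The single countermodel (p false, q true) is exactly why the abductive leap needs extra, non-logical warrant.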

Following the general pattern ‘Evidence => Explanation => Inference’, we infer something based on what would be the best explanation given a law-like rule (premise 1) and an observation (premise 2). The truth of the conclusion (the explanation) is not logically given, but something we have to justify, argue for, and test in different ways before we can establish it with any degree of certainty. And as always with explanations, what counts as best is relative to what we know of the world. In the real world, all evidence has an irreducible holistic aspect. We never conclude that evidence follows from a hypothesis simpliciter, but always given some more or less explicitly stated contextual background assumptions. All non-deductive inferences and explanations are therefore necessarily context-dependent.

If we extend the abductive scheme to incorporate the demand that the explanation has to be the best among a set of plausible competing/rival/contrasting potential and satisfactory explanations, we have what is nowadays usually referred to as inference to the best explanation.

In inference to the best explanation we start with a body of (purported) data/facts/evidence and search for explanations that can account for these data/facts/evidence. Having the best explanation means that you, given the context-dependent background assumptions, have a satisfactory explanation that can explain the fact/evidence better than any other competing explanation — and so it is reasonable to consider/believe the hypothesis to be true. Even if we (inevitably) do not have deductive certainty, our reasoning gives us a license to consider our belief in the hypothesis as reasonable.
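As a toy illustration of this kind of reasoning (the hypotheses and numerical scores below are invented for the example, not drawn from the text), one can rank rival explanations by how well each accounts for the evidence, given some explicitly stated background assumptions:

```python
# Toy inference to the best explanation: each candidate hypothesis gets a
# score for how well it accounts for each piece of evidence. Hypotheses and
# scores are hypothetical, chosen only to illustrate the scheme.

evidence = ["wet streets", "wet lawn", "clear sky"]

# how_well[h][e]: degree (0..1) to which hypothesis h explains evidence e,
# relative to background assumptions (e.g. "the sprinkler does not reach
# the street").
how_well = {
    "it rained":             {"wet streets": 0.9, "wet lawn": 0.9, "clear sky": 0.2},
    "sprinkler was on":      {"wet streets": 0.1, "wet lawn": 0.9, "clear sky": 0.9},
    "street cleaning truck": {"wet streets": 0.9, "wet lawn": 0.1, "clear sky": 0.9},
}

def score(hypothesis: str) -> float:
    """Aggregate how well a hypothesis accounts for all the evidence."""
    return sum(how_well[hypothesis][e] for e in evidence)

# The "best" explanation is only best relative to the rivals we considered.
best = max(how_well, key=score)
print(best)  # 'it rained' scores highest among these three rivals
```

Note that the verdict is doubly conditional: on the background assumptions encoded in the scores, and on the set of rivals actually considered — exactly the context-dependence stressed above.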

Accepting a hypothesis means that you believe it explains the available evidence better than any other competing hypothesis. Knowing that we — after having earnestly considered and analysed the other available potential explanations — have been able to eliminate the competing potential explanations warrants and enhances the confidence we have that our preferred explanation is the best explanation, i.e., the explanation that (given it is true) provides us with the greatest understanding.

This, of course, does not in any way mean that we cannot be wrong. Of course we can. Inferences to the best explanation are fallible inferences — since the premises do not logically entail the conclusion — so from a logical point of view, inference to the best explanation is a weak mode of inference. But if the arguments put forward are strong enough, they can be warranted and give us justified true belief, and hence, knowledge, even though they are fallible inferences. As scientists we sometimes — much like Sherlock Holmes and other detectives that use inference to the best explanation reasoning — experience disillusion. We thought that we had reached a strong conclusion by ruling out the alternatives in the set of contrasting explanations. But — what we thought was true turned out to be false.

That does not necessarily mean that we had no good reasons for believing what we believed. If we cannot live with that contingency and uncertainty, then we are in the wrong business. If it is deductive certainty you are after, rather than the ampliative and defeasible reasoning of inference to the best explanation, then go into mathematics or logic, not science.

  1. March 14, 2021 at 10:05 pm

    Interesting argumentation. Reminded me of a quote by John Wilder Tukey which I used as a motto in my 1974 doctoral thesis and which read: “Data analysis must seek for scope and usefulness rather than security and be willing to err moderately often in order that inadequate evidence leads into the right direction”. Then he quotes Martin Wilk with “The hallmark of good science is that it uses models and ‘theories’ but never believes them” (John W. Tukey, The future of data analysis, Ann. Math. Statist., Vol. 33, pp. 1–67, at p. 7).

    • A.J. Sutter
      March 15, 2021 at 3:24 am

      @Dr. Bardy Unfortunately, the quotes you mention are broad enough that many economists might agree. Here’s an example from a leading graduate textbook: David Romer (2012) Advanced Macroeconomics. 4th ed. New York: McGraw-Hill Education, at 14:

      [T]he purpose of a model is not to be realistic. After all, we already possess a model that is completely realistic – the world itself. The problem with that “model” is that it is too complicated to understand. A model’s purpose is to provide insights about particular features of the world. If a simplifying assumption causes the model to give incorrect answers to the questions it is being used to address, then that lack of realism may be a defect. (Even then, the simplification – by showing clearly the consequences of those features of the world in an idealized setting – may be a useful reference point.) If the simplification does not cause the model to provide incorrect answers to the questions it is being used to address, however, then the lack of realism is a virtue; by isolating the effect of interest more clearly, the simplification makes it easier to understand. [Emphasis in original]

      I believe Romer could claim that this passage is entirely consistent with the dicta of Profs. Tukey and Wilk. Nonetheless there are some problems with Romer’s argument. #1, he doesn’t talk about how one approaches a model of the real world after “isolating effects of interest”; simple aggregation won’t do if there are interactions, especially nonlinear ones, among those effects. #2 and more importantly, a model’s ability to answer the questions it’s being used to address is worthless if the questions being asked are the wrong ones.

    • March 15, 2021 at 9:17 pm

      Tukey also says “It is not sufficient to start with what it is supposed to be desired to estimate, and to study how well an estimator succeeds in doing this. We must give even more attention to starting with an estimator and discovering what is a reasonable estimand, to discovering what it is reasonable to think of the estimator as estimating.”

      It seems to me that much economic practice can be criticised for the claims it seems to make for estimators and estimands, but that rather than ban the use of the techniques used, one might instead seek to be clearer about what it is that is being estimated.

      Much conventional economics seems reasonable as a description of how things are and how things may reasonably be expected to be in some limited sense. Problems arise when economies are faced with novel situations (such as pandemics).

      Tukey also makes a distinction between ‘conclusions’ and ‘indications’. (Deductive) logic and mathematics can be used for either, but it is important not to confuse the two, as mainstream economics often seems to. Myself, I’m not clear why we would need a concept of ‘abduction’ if we understood the logic better. But maybe I’m missing something.

  2. March 15, 2021 at 12:00 pm

    As a mathematician, it seems to me that the use of mathematics in economics is often misinterpreted due to a lack of appreciation of its proper role.

    Lars’ ‘In science we standardly use a logically non-valid inference — the fallacy of affirming the consequent’ is sort of kind of true, but may need clarifying.

    In science, one is never justified in claiming ‘q’, so nothing can be unconditionally inferred from it. What one may be able to claim is that “q ‘fits’ a certain body of evidence”, in which case one can infer that if p ‘explains’ q without making any further restrictions on what is observable, then “p ‘fits’ the same body of evidence”. Now, for reasonable notions of ‘fit’, ‘explains’ etc. this can be a straightforward deduction.

    Sometimes people do use the kind of abduction Lars suggests, without being too concerned about the logic. Isn’t this what some refer to as pseudo-science, often underpinned by pseudo-mathematics? My own view is that in such circumstances it is always helpful to try to construct an explicit deductive argument, and to draw attention to any gaps (as Lars does for neoclassical economics).

    Thus my own view is that ‘proper’ mathematical modelling is essential to any credible economics. I also agree with AJS that even an obviously wrong model may be useful as a reference. I guess the main thing is not to confuse the various purposes, as some economists seem to. (And not just economists!)

    • A.J. Sutter
      March 16, 2021 at 3:25 am

      @Dave Marsay Thanks for your response. My formation was in physics, the discipline that gave us “Consider a spherical cow ….” So far be it from me to condemn all obviously wrong models. But the uses of obviously wrong models in physics tend to be different from those in economics as currently practiced.

      First of all, nonlinearity and emergent phenomena are well-enough appreciated among physicists that practitioners are more wary of the fallacy of composition (idea that combination of “isolated” effects will give the big picture, as suggested in the Romer passage I quoted).

      Second, observations always take primacy over Euclidean-style demonstrations. Obviously wrong models that turn out not to be any good for predictions in physics get chucked, whereas in economics they get defended, and/or earn their proponents tenure, endowed chairs, etc.

      Third, the empirical testing of obviously wrong physics models often occurs under conditions that are reasonably reproducible, whereas the nature of economics and other social sciences is more historical, making reproducibility far more difficult to achieve within natural science-like tolerances.

      And finally, obviously wrong models in physics are far less often used as the basis for policy recommendations and actions than are the obviously wrong models propounded by economists.

      Moreover economics has an even more fundamental problem than its abuse of obviously wrong models: some key assumptions those models are based on, such as “equilibrium,” were imported into the field as articles of faith, and not developed through observation of economic behavior. Mirowski’s More Heat Than Light (1989) is the classic source for this history.

      This isn’t to say, by the way, that economics (in a broad sense, not just the mainstream sense) and other social sciences will never be valuable unless they can meet the standards of natural sciences in every respect. Rather, they can be appreciated for what they are, for their limitations and for what they can contribute, without claiming the authority of natural science. Policy decisions do have to be made every day, after all. But the same can be said of the humanities. The US State Department used to have a group, including the Secretary of State, that looked to literature for insights in how to deal with the country’s allies and adversaries. Personally I would be happier with policy-makers who are more deeply-versed in Homer, Aeschylus and Shakespeare than in the work of Gary Becker, Paul Romer or Milton Friedman.

      • March 16, 2021 at 7:59 am

        Great comments. Thanks, Andrew!

      • March 16, 2021 at 5:36 pm

        Moreover economics has an even more fundamental problem than its abuse of obviously wrong models: some key assumptions those models are based on, such as “equilibrium.” (A.J. Sutter on March 16, 2021 at 3:25 am)

        I strongly support Sutter. What a new economics needs is (A) to abandon the equilibrium framework, and (B) seek a new analytical framework that can replace it.

        Many heterodox economists hesitate (with reason) to abandon the equilibrium framework, because they believe we would lose a major framework of analysis. However, as long as we stay within it, we cannot discover a new framework. Moreover, a new framework has in fact existed since the 1920s: process analysis (it goes by various names, such as the step-by-step method, sequence analysis, etc.). It may not be very humble to say this, but we have succeeded in (A) and (B). See Marc Lavoie’s book review.

        Let me quote the latter half of the first paragraph of the book review.

        Their goal is to show why it is that a market economy “works”, that is, how a fully decentralized input-output production economy will not be driven to chaos, and thus will converge to a stable quantity structure, despite agents behaving in a realistic way and having only access to local information, and despite prices remaining fully fixed. Their hope is that these alternative foundations will convince more economists, mainstream or even some heterodox ones, to abandon approaches based on constrained optimization, demand-determined (flexible) prices, and totally unrealistic assumptions, such as the tâtonnement process, the centralization of information, the lack of inventories, and the “all at once” solution.

  3. March 15, 2021 at 5:44 pm

    Induction and abduction are not necessarily considered fallacies by all logicians. Inductive arguments (using induction in the classic sense here) are simply more fallible than deduction. This raises the question of how the major premise is established. Though Peirce early on had a more or less standard definition of induction (going from multiple observations of the consequent to the antecedent), his final definition was a bit idiosyncratic. Peirce uses abduction as a means of drawing inferences based on observation (30% of my sample are red balls; therefore, 30% of the balls in the urn are red). Peirce did not argue, though, that this was sufficient to establish the validity of the consequent: for that one needed to test hypotheses, and Peirce termed this process of testing “induction”. If after further sampling I continue to draw 30% red balls, I might say my hypothesis is confirmed.

    The problem in the social sciences, of course, is that if you are sampling from an urn where the percentage of red balls is changing, as happens in systems that are evolving, you can’t have a reliable probability estimate, and I should expect that many empirical generalizations will break down (e.g. there is no reason to expect that every 2% change in unemployment is the result of a 4% change in GDP). However, one can still posit a general relationship between changes in unemployment and GDP, given certain kinds of institutional structures. Just because mainstream economists do not correctly employ covering-law explanations does not mean one can or should abandon modus ponens. Rather, the thing is to engage in ongoing good-faith inquiry through the process of abduction, deduction and induction to arrive at varying degrees of warrant that a general principle is true or false.

    You also raise an interesting issue about logic: I do tend to agree that it is regulative in reasoning. There is another tradition that says that the laws of logic are actually the most general empirical generalizations of experience.
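The urn point can be made concrete with a small simulation (a hypothetical illustration, not from the comment itself): when the urn is stationary, the abductive leap from sample share to urn share is reliable; when the share of red balls drifts between samples, the same inference keeps missing the current composition.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def draw_sample(p_red: float, n: int) -> float:
    """Draw n balls with replacement from an urn whose red share is p_red;
    return the observed share of red balls in the sample."""
    return sum(random.random() < p_red for _ in range(n)) / n

# Stationary urn: repeated samples all track the true share (0.30).
stationary = [draw_sample(0.30, 1000) for _ in range(5)]

# Evolving urn: the red share drifts upward between samples, so an estimate
# formed from any one sample is soon stale.
drifting = [draw_sample(0.30 + 0.1 * t, 1000) for t in range(5)]

print([round(s, 2) for s in stationary])  # all near 0.30
print([round(s, 2) for s in drifting])    # climbing away from 0.30
```

With a stationary urn the sampling error is small and shrinks with n; with the drifting urn no amount of further sampling "confirms" the original 30% hypothesis, which is the evolving-systems problem raised above.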

  4. Ikonoclast
    March 16, 2021 at 12:13 am

    Point 1. Although the included cartoons in Lars Syll’s posts are often amusing, the one is this post is problematic. A cartoon of a man “bitch-slapping” (ie. abusing) a woman is not acceptable. I think we all know that and I think Lars did not intend any wrong message. Nevertheless, we need to be careful. Right now in the UK and Australia we have large demonstrations by women saying that women’s lives and persons matter: demonstrations provoked by murders, rapes and police violence against women combined with official and elite denial and disinterest in the rights of women.

    Point 2.

    Lars refers to law-like rules. He means axioms, I believe.

    Capitalism is proving incapable of meeting our modern problems and crises. Indeed, capitalism is creating our problems and crises. The corollaries of capitalism are attempted endless growth, resource depletion, environmental destruction, over-consumption and the unnecessary immiseration of millions. The Marxist Immiseration thesis definitely holds up.

    “In Marxist theory and Marxian economics, the immiseration thesis, also referred to as emiseration thesis, is derived from Karl Marx’s analysis of economic development in capitalism, implying that the nature of capitalist production stabilizes real wages, reducing wage growth relative to total value creation in the economy, leading to the increasing power of capital in society.” – Wikipedia.

    Is this not exactly what we see today, since the 1970s? We see stabilized, that is stagnant, real wages. We see reduced wage growth relative to total value creation in the economy. We see the increasing power of capital in society. All exactly what Marx predicted, or was predictable from his basic thesis. How could these predictions be right? Marxists have no crystal ball, and the political economy or socioeconomic system is a chaotic [1] and even indeterminate emergent system. How is such prediction possible?

    What Marx realized essentially was that he was dealing with (in capitalism) a prescriptive axiomatic system. What do I mean by a prescriptive axiomatic system? Humans, in their attempts to grapple rationally and logically with the world and society, create axiomatic systems which permit them to apply heuristics (rules of thumb) to manage systems. An early example was and is Euclidean Geometry. Euclidean Geometry is what I would term a descriptive axiomatic system. It is descriptive in the sense that it attempts to “describe” 2D space, 2D topographies, with fundamental axioms sometimes termed The Axioms of Euclidean Plane Geometry. Euclidean Geometry has an empirical foundation in real space idealized as flat space. This idealization process begins the “journey” from description to prescription. This example illustrates that there are no purely empirical axioms. Even axioms as relatively simple as those of Euclidean Geometry and founded in the real have already begun the journey from descriptive of the real to prescriptive of how we are going to deal neatly with the real in practical, pragmatic, customary and legal terms. Example: sloping land is still surveyed as a flat plane for the purposes of determining property area.

    Something similar to Euclidean axiomatization, in a sense, was attempted with capitalism by its pragmatists, theorists and promulgators. Property, production, distribution and consumption (insofar as they are economic activities) were theorized by axiomatization from real phenomena. However, these “real phenomena” are far less uncomplicatedly and objectively real, in many senses, than a flat plain of land idealized to a flat plane. The idealization of property, as an axiom, was particularly prone to what we may term “prescriptive drift”. That is, the idealization, definition and instantiation of property as an ontological object category takes property a long way from what we can term a clearly descriptive category in the objective realist sense. It moves property to a very legalistically and power-defined prescriptive object category.

    What does this mean in terms of outcomes? It means essentially that property becomes formal and not real in certain of its manipulations of ownership and valuation. The real and the formal are (partially) sundered in this sense. A person can formally own land and have this ownership backed up by force of law and ultimately the state’s monopoly on legitimated violence. Ownership of property means the ability to exclude others from it and to retain or dispose of it (via money transactions in the main although gifting and forgiving is possible). Ownership has to be backed by law and policing, otherwise squatting and theft occur. The formal and the real are linked by legal law (formal) and force (real) but this formal-real linking is only partial and imperfect. This is why I said the real and the formal are partially sundered by ownership. The parts of the real truly sundered by ownership (disposal and exclusion rights) are the environment and property-less people. Bad things happen to the sundered real when the axioms of capitalist property are applied: the environment is damaged, perhaps even destroyed, and property-less people are kicked out into the cold.

    This is where we discover that the axioms of capitalism, especially the axioms of ownership, are poorly aligned with the good stewardship of the whole real, which must include the entire biosphere and all people. Marx, the better Marxist theorists and others (the few who actually fully understood his work [2]) realized in essence that what they were dealing with was a formal axiomatized system. Capitalism, especially as a set of ownership and capitalization rules, is sufficiently removed from the real that it behaves as a fully formal system UNTIL one or more of its axioms reach, by tendency, a real asymptotic limit. This limit is the point of approach to the asymptote which violates the real fundamentally. This consequently means the axiomatically prescribed operations of the capitalist system break down at that point, as the prescriptive (formal) in capitalism reaches its always immanent clash with the real.

    Note 1. “Chaotic” in the sense of chaos theory.
    Note 2. See the work of John Bellamy Foster plus the work of Shimshon Bichler and Jonathan Nitzan. The latter two researchers are not Marxists. There are several roads to understanding capitalism’s intrinsic, unavoidable and ultimately catastrophic problems which are due to its doctrinaire formal prescriptive nature which ignores real systems. We can include the theorising of Marx and Engels, Veblen, Bichler and Nitzan and indeed of complex systems theorists and econophysicists as roads of approach to understanding these issues.

    • A.J. Sutter
      March 16, 2021 at 2:30 am

      Actually, the problematic image is of Batman (male) slapping Robin (male), his sidekick and apprentice. This is not to justify the use of physical aggression in pedagogy, be it in the manner of Zen masters, Justice League members, or otherwise. But it is a different issue from the one you mention.

      • Ikonoclast
        March 16, 2021 at 7:01 am

        My mistake. I mis-perceived the image. Nevertheless, use or depiction of violence in pedagogy or rhetoric (without redeeming artistic or social value) is to be avoided, IMHO. The trivializing of violence against the relatively weak is one of the techniques of the oppressive elites. Best not to copy their behaviors, again IMHO. But no more from me on this score or I will begin to sound too sanctimonious.

      • Meta Capitalism
        March 16, 2021 at 12:31 pm

        When someone bitch slaps you on the right cheek,
        turn the other as well. ~ Itinerant Carpenter

      • Craig
        March 16, 2021 at 3:37 pm

        The purpose of a zen master striking a novice is not simply violence. Its intention is really just an oppositional method to meditation/simply looking, in hopes of bringing his or her attention-consciousness into the present moment, which is where that simple natural integration known as satori exists and can only be experienced. Satori is another word for the state of grace that is the actual state of the cosmos, where all of the falderal of human mental opposition is finally perceived as both illusion and necessary…as one cannot truly experience truth without also experiencing fallacy, and yet a thorough integration of opposites/duality is not merely some kind of obliteration, but a thirdness, a greater oneness of truth.

        Grace, properly experienced and understood is always relevant, resolving and unitary. As with consciousness raising, so with economic theorizing and policy formation.

  5. March 16, 2021 at 2:58 pm

    The following is part of my comment on a draft of Jocsef Moczar’s Chapter 8, Bourbaki Mathematics and Debreu’s Axiomatic Economic Theory with Mother Structure, which is uploaded to ResearchGate:
    https://www.researchgate.net/publication/349898388_Chapter_8_Bourbaki_Mathematics_and_Debreu's_Axiomatic_Economic_Theory_with_Mother_Structure

    The first quotation concerns the third aspect of the mathematical crisis at the end of the 19th century and why Gerard Debreu is wrong. The second quotation explains three aspects of the crisis, and the third quotation explains why understanding the responses of mathematics is crucial for knowing the nature of modern mathematics.

    One crucial error of Debreu is that he did not understand the third aspect of the mathematical crisis at the end of the 19th century. In his presidential address to the annual meeting of the American Economic Association in December 1990, he stated as follows:

    The benefits of that special relationship were large for both fields; but physics did not completely surrender to the embrace of mathematics and to its inherent compulsion toward logical rigor. The experimental results and the factual observations that are at the basis of physics, and which provide a constant check on its theoretical constructions, occasionally led its bold reasonings to violate knowingly the canons of mathematical deduction.

    In these directions, economic theory could not follow the role model offered by physical theory. Next to the most sumptuous scientific tool of physics, the Superconducting Super Collider whose construction cost is estimated to be on the order of $10^10 (David P. Hamilton, 1990; see also Science, 5 October 1990), the experiments of economics look excessively frugal. Being denied a sufficiently secure experimental base, economic theory has to adhere to the rules of logical discourse and must renounce the facility of internal inconsistency.

    In this quote Debreu is right in all sentences except the last. It is true that experiments in economics are often very difficult. It is also true that economics must “adhere to the rules of logical discourse and must renounce the facility of internal inconsistency”. But then, how are the relations between the real world (the economy out there) and the theory (the abstract economy) established? The fact that an axiomatic system is logically consistent does not prove that the axiomatic system, with terms like resources, prices, efficient use, profits and others, corresponds in any sense to the real world. Just as non-Euclidean geometry was discovered with the same axioms except the parallel axiom, it is possible that the ideally constructed system of Debreu represents a world that is different from the actual economy. Debreu is simply violating the lessons that formalism and Bourbaki learned from non-Euclidean geometry. Why did he forget this simple mathematical philosophy? This proves, in my opinion, that Debreu was not a “well-trained mathematician forged in the Bourbaki model”.

    Corry, Weintraub and Mirowski (W&M hereafter), and you, talk about formalism and structure but do not explain well why Bourbaki was so influential and successful. There was a crisis of mathematics around the turn of the 20th century, for three reasons. The first and biggest was the discovery of paradoxes in (naive) set theory: the Burali-Forti, Zermelo and finally Russell paradoxes. This was a serious problem for all mathematics, because the whole construction of mathematics might fall down unless something was done to eliminate those contradictions. The second aspect of the crisis was the risk of mathematics disintegrating into various independent fields of research. The third was not a crisis in the proper meaning of the word, but it shook the firm belief in mathematics that had continued from the time of classical Greece: the discovery of non-Euclidean geometry. It required a big turn in how to interpret an axiomatic system, because two (or more) different systems of axioms exist and are possible. If you do not explain these three aspects of the mathematical crisis at the turn of the century, readers cannot understand why “Bourbakism” emerged and what kind of significance it had for the mathematics of the second half of the 20th century.

    What is not often told about Bourbaki is that it was a response to the third aspect of the foundational crisis of mathematics at the end of the 19th century, i.e. the significance of non-Euclidean geometry. This was not a simple discovery story; it required a Copernican turn regarding mathematical truth. Non-Euclidean geometry itself was discovered by Nikolai Lobachevsky, who died in the mid-19th century. If told that a second system of geometry had been discovered, ordinary people would ask which one is correct, meaning which is true as the geometry of our world or universe. However, the crisis at the end of the 19th century was not this question. It was revealed (by Henri Poincaré and others) that the two systems are true in the sense that one (e.g. Lobachevsky geometry) is logically consistent if and only if the other (e.g. Euclidean geometry) is logically consistent. This can be shown, for example, by embedding a model of Lobachevsky geometry in the Euclidean world. In the two-dimensional case, a model of Lobachevsky geometry is given by a circular disk in the plane (as the whole plane), its interior points (as the points), and circular arcs both ends of which cut the boundary circle perpendicularly (as the straight lines). Hilbert used a similar method when he proved in his Grundlagen der Geometrie that his system of axioms is consistent and complete as long as the field of real numbers is consistent. In a sense, he embedded his axiomatic system in the three-dimensional space of real numbers.
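    [Editorial aside: the disk model just mentioned has a standard compact formulation, which may help readers see how the relative-consistency argument works.]

    ```latex
    % Poincaré disk model of the Lobachevsky (hyperbolic) plane
    D = \{\, (x,y) \in \mathbb{R}^2 : x^2 + y^2 < 1 \,\}, \qquad
    ds^2 = \frac{4\,(dx^2 + dy^2)}{\bigl(1 - x^2 - y^2\bigr)^2}
    ```

    Here the “points” are the elements of \(D\) and the “lines” are diameters of \(D\) together with circular arcs meeting the boundary circle orthogonally. Every axiom of Lobachevsky geometry, read through this dictionary, becomes a theorem of Euclidean geometry, so an inconsistency in the former would produce one in the latter.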

    This was a Copernican revolution in mathematics, because it broke the intrinsic connection between mathematics and the real world. It is true that modern mathematics since the 18th century developed through its close relationship with modern physics. Analytical mechanics was nothing other than mathematics; it greatly helped the understanding of the logic of Newtonian mechanics and contributed to expanding the latter’s range of application. Non-Euclidean geometry broke this connection between the “real” world and the world of mathematics. This was a great change of worldview for mathematicians, but for non-mathematicians this turn was more difficult than the switch from the geocentric to the heliocentric system.

    • March 17, 2021 at 12:51 pm

      What I take from Yoshinori is that many economists don’t really ‘get’ mathematics per se, but have some long discredited view of it, and that this is a problem for economics and for real life. A.J.S. has given us a list of things that economists often seem to think are problems with logic and mathematics, but which it seems to me are better explained by Yoshinori. Hence my view ‘on the use of logic and mathematics in economics’ remains that these are powerful tools that would be useful to economists, if only they were used aright.

      Debreu does, though, raise an interesting further point.

      In any empirical theory, such as physics, one can reach an impasse in which one fails to find a consistent theory to cover the subjects of interest, and ends up with, for example, inconsistencies between theories about large and small scales. At this point, one may reasonably, as physicists do, proceed with caution to apply one’s ideas despite their logical flaws. Neoclassical economists may seem to be following physicists’ lead here. But they seem to lack proper caution, and sometimes seem to raise their half-baked ideas to dogma, whereas physicists actively seek to improve their theories, employing logic and mathematics in the process. It seems to me that some brave economists ought to be trying to do the same in their ‘use of logic and mathematics in economics’, if we are to get out of current theory’s dismal bind. Meanwhile, we can perhaps follow Tukey in taking much familiar theory as ‘indicative’ rather than conclusive.

      • A.J. Sutter
        March 17, 2021 at 2:24 pm

        @ Dave: Sorry, I don’t follow your comment, “A.J.S. has given us a list of things that economists often seem to think are problems with logic and mathematics.” I can’t reconcile that description with the context or content of my list.

        My list was in response to your defense of obviously wrong models (OWMs). After acknowledging that economists aren’t the only ones who use OWMs, my first point was that the manner in which economists use their OWMs is distinguishable from how natural scientists use their OWMs. And my second was that economists’ OWMs more often have adverse results for society than do natural scientists’ OWMs. The views of economists about logic etc. didn’t enter into it.

      • March 17, 2021 at 3:13 pm

        My apologies. I have to admit I struggle to comprehend the way that economists talk about mathematics and logic, and I may be further confused about bringing in physics.

        A well-known saying is that “all models are wrong, but some are useful”. We should add ‘useful for some purposes’, as you do. It seems to me that some physicists sometimes forget this caveat, but other physicists generally challenge such mistakes in a way that mainstream economists haven’t. We seem to agree on this.

        The problem I have with terms like ‘best explanation’ and ‘obviously wrong’ is that they are subjective. To me, the problem with economic models is not that they are ‘obviously’ wrong, but that they are sometimes used when they are not only not ‘useful’ but can be dangerously misleading.

        In your first point you mention ‘non-linearity and emergent phenomena’. Mainstream economics is ‘excessively narrow’ in its use of mathematics (as Lars says): but why not follow physicists in using more appropriate mathematics? Similarly with your point about equilibrium.

        Your other points seem to me to be likening economists to astrologers who point to their use of sophisticated mathematics as if it somehow justified their views. A line of argument is only as good as the weakest link. Lars’ view on deduction versus abduction seems to me a distraction.

      • A.J. Sutter
        March 17, 2021 at 4:35 pm

        Thanks. A couple of points:

        1. My comment about nonlinearity and emergence isn’t a comment about mathematics. It reflects more of a philosophical attitude, recognizing that bigger systems often have something special about them that you can’t see from smaller components. E.g., most chemistry is based on Pauli’s exclusion principle (two electrons or other fermions can’t occupy the same quantum state in the same quantum system at the same time), but you can’t derive that principle from just one electron, even though single electrons are in some sense more “fundamental” than two electrons.

        Therefore, one should be careful about the idea of learning about real-world economic phenomena by “isolating” small parts of economic behavior. The point I made about historicity, though, alludes to a challenge facing economists: whereas physicists might be able to study larger systems under conditions of adequate reproducibility, that opportunity is generally denied to economists. Again, that’s not something that can be fixed by “better” or “more appropriate” math. Which is also why I suggested that a better path for economics might be to accept that it isn’t a natural science.

        2. A flaw with the maxim “All models are wrong, but some are useful” is that it doesn’t suggest any criterion for identifying when a model is useful. Such a criterion was suggested in the David Romer passage I quoted in my first comment in this thread (March 15 03:24 am), but that was deficient for the reasons I mentioned in that comment. Mostly the maxim is used by economists as a carte blanche for claiming that their own preferred model is worthwhile, regardless of what fallacies it contains. Viewed another way, the maxim is mainly a tool for hypnotizing students into submission.

        3. Finally, I didn’t really hint at an analogy to astrology, but your comment is perceptive nonetheless: I do indeed use that analogy often in my teaching. Astrology was based on rigorous observation of the positions of celestial bodies in the sky, and in its heyday used state-of-the-art technology (such as the instruments devised by the astrologer Tycho Brahe, whose observations were good enough to allow Kepler to deduce that planetary orbits were non-circular ellipses). Similarly, some economists rely on very sophisticated data sets and collection techniques, too. Thing is, astrologers then make deductions from their observations on the basis of very questionable assumptions; so too many economists. Haruspicy is in principle a more vivid stand-in for astrology in this context, though with fewer students having a classical education these days I usually don’t risk mentioning it. Thanks again.

      • Ikonoclast
        March 17, 2021 at 10:41 pm

        “To communicate with Mars, converse with spirits,
        To report the behaviour of the sea monster,
        Describe the horoscope, haruspicate or scry,
        Observe disease in signatures, evoke
        Biography from the wrinkles of the palm
        And tragedy from fingers; release omens
        By sortilege, or tea leaves, riddle the inevitable
        With playing cards, fiddle with pentagrams
        Or barbituric acids, or dissect
        The recurrent image into pre-conscious terrors—
        To explore the womb, or tomb, or dreams; all these are usual
        Pastimes and drugs, and features of the press:
        And always will be, some of them especially
        When there is distress of nations and perplexity
        Whether on the shores of Asia, or in the Edgware Road.” –

        T.S. Eliot.

  6. March 17, 2021 at 10:08 pm

    AJS: I agree that pure logic and mathematics on their own don’t fix anything, but (contrary to what Lars’ original post might be interpreted as saying) I think making the maximum use of appropriate logic and maths is a good idea in both physics and economics. But we need to avoid being misinterpreted or confused.

    Thank you for taking the time to engage.

  7. A.J. Sutter
    March 18, 2021 at 2:06 am

    @Ikonoklast Thank you for the haruspicy shout-out.

  8. March 18, 2021 at 6:29 am

    Lars Syll seems to be too busy denouncing the bad practice of mainstream economists to see what mathematicians (mathematical economists in particular) are doing. The following are three excerpts from Syll’s article “On the use of logic and mathematics in economics”.

    Text A

    In mathematics, the deductive-axiomatic method has worked just fine. But science is not mathematics. Conflating those two domains of knowledge has been one of the most fundamental mistakes made in modern economics.

    Text B

    Deduction — and the inferences that go with it — is an example of ‘explicative reasoning,’ where the conclusions we make are already included in the premises. Deductive inferences are purely analytical and it is this truth-preserving nature of deduction that makes it different from all other kinds of reasoning.

    Text C

    If we extend the abductive scheme to incorporate the demand that the explanation has to be the best among a set of plausible competing/rival/contrasting potential and satisfactory explanations, we have what is nowadays usually referred to as inference to the best explanation.

    In inference to the best explanation we start with a body of (purported) data/facts/evidence and search for explanations that can account for these data/facts/evidence. Having the best explanation means that you, given the context-dependent background assumptions, have a satisfactory explanation that can explain the fact/evidence better than any other competing explanation — and so it is reasonable to consider/believe the hypothesis to be true. Even if we (inevitably) do not have deductive certainty, our reasoning gives us a license to consider our belief in the hypothesis as reasonable.

    I agree with him that we should not conflate science and mathematics (Text A). But it seems to me that Lars Syll does not rightly understand what mathematical research is and what mathematicians usually do. He is conflating mathematics with deduction. Deduction (or proof by logical inference) is only a small part of what mathematicians do. This confusion may be inevitable, because most mathematics books and articles are written in this deductive style. But that style is merely a method of reporting long efforts of trial and error. Mathematicians and mathematical economists (of the true kind, excluding blind imitators of preceding works) are in practice doing what Syll describes in Text C.

    If readers have ever worked (perhaps in high school) on construction problems in elementary geometry, they will have learned that a complete solution is typically composed of four parts or phases: analysis, construction (how to draw), proof, and discussion. Proof occupies only one of the four phases. And within the proof phase, checking whether the proof correctly follows deductive rules is only a small part of the thinking, because finding a good proof requires trial and error.

    But this example may not be very instructive, because solving geometric construction problems is now a rather rare experience. So let me cite an interesting pedagogical experiment by Fulvia Furinghetti and Domingo Paola (2003) that I found by pure chance. The authors, inspired by Herbst’s emphasis that “the proof is intimately connected to the construction of mathematical ideas, … as defining, modeling, representing, or problem solving,” set up an experimental learning course to observe how students (two students in collaboration) construct a part of a theory.

    Nunokawa (1996) has discussed the application of Lakatos’ ideas to mathematical problem solving. In our approach to proof we are thinking something similar. We see students as immersed in a situation similar to that termed by Lakatos (1976) pre-Euclidean, that is to say a situation in which the theoretical frame is not well defined so that one has to look for the ‘convenient’ axioms that allow constructing the theory. The didactical suggestion implicit in Lakatos’ words is that it is advisable to recover the spirit of Greek geometers. When they made proofs they were not inside a theory in which axioms were explicitly declared. Initially antique geometry developed in an empirical way, through a naïve phase of trials and errors: it started from a body of conjectures, after there were mental experiments of control and proving experiments (mainly analysis) without any sure axiomatic system. According to Szabo, this is the original concept of proof held by Greeks, called deiknimi. The deiknimi may be developed in two ways, which correspond to analysis and synthesis. These ideas suggest a way of realizing cognitive continuity in our approach to proof in classroom. Also they suggest the means to reach this objective: socialization, discussion, sharing of ideas.

    What is described here is only a primitive phase of theory making, but it conveys well what mathematicians are doing in their research work (not in writing or reading mathematics). It is important to note that axioms are not given here; they are to be discovered. Mathematical economists (of the true kind) work in a similar situation and in a similar way. They have to choose hypotheses that can serve as a good set of axioms for the new theory they want to construct. This is very difficult work. We normally draw on extant observations such as stylized facts (e.g. Kaldor’s stylized facts), observations on how managers and entrepreneurs behave (Mintzberg, The Nature of Managerial Work, 1973), and questionnaire survey reports (the first such survey being the Oxford Economists’ Research Group’s in the late 1930s).

    For the mathematical economist, deduction from axioms is only a small part (or rather, an idiosyncratic aspect) of the whole research, which is an effort to find a coherent system of seemingly established propositions, creating and revising concepts (the meanings of propositions change with this), and choosing a good range of validity for the theory.

    If the true nature of mathematical economists’ work is understood in this way, Lars Syll will find that it is very similar to what he calls inference to the best explanation, i.e. what he describes in Text C. If he understands this, he should work to support true mathematical economists instead of denouncing them.

    • March 18, 2021 at 6:42 am

      Addendum: Text B is cited to show Lars Syll’s typical image of mathematics or his deductive-axiomatic method.

    • March 18, 2021 at 12:18 pm

      I did start to critique Lars’ writing, drawing on the same quotes as Yoshinori, but couldn’t explain it as well.

      Coincidentally my bed-time reading is Marcus du Sautoy’s ‘the music of the primes’ (reaching page 116). This is usually recommended as an accessible introduction to real mathematics. But I’m finding it insightful as to why some people have such weird ideas about mathematics and science (not just economists).

      Marcus describes how in pre-modern times logic and mathematics were quite different from now, and mutually incompatible. Mathematics was, broadly speaking, a part of physics. The main advocates of this ‘pragmatic’ view were largely French, who resisted the ‘modern’ ideas, largely from Germans, as it happens.

      It seems to me that:
      (1) Classical ‘French’ mathematics can be split into beliefs that are consistent with the modern view and a ‘meta-mathematics’ that is illogical.
      (2) That the illogical part of classical mathematics is very like Keynes’ ‘pseudo-mathematics’.
      (3) That many contemporary critiques of the use of mathematics are valid in so far as they apply to classical mathematics.
      (4) That too often, economists (among others) fail to make this distinction, and can even interpret valid mathematics using invalid classical meta-mathematics.
      (5) That, as in ‘good practice’ in physics (and elsewhere) it is worth trying to use logic and mathematics ‘properly’, unless one really is satisfied with the status quo (which I assume we aren’t).

      Can we throw out the classical meta-mathematical bathwater without losing the mathematical baby that could prove useful?

  9. March 18, 2021 at 3:30 pm

    Dave, thank you for introducing me to a seemingly interesting book. What is more amazing is that it has already been translated into Japanese and is sold in paperback.

    Marcus describes how in pre-modern times logic and mathematics were quite different from now, and mutually incompatible. Mathematics was, broadly speaking, a part of physics. The main advocates of this ‘pragmatic’ view were largely French, who resisted the ‘modern’ ideas, largely from Germans, as it happens.

    This must be a very important point to keep in mind when we talk about mathematics. I had not realized it. Your ideas are also interesting. Let us continue considering them.

    • March 18, 2021 at 3:33 pm

      Sorry! I forgot to put the following part in quotation marks:

      Marcus describes how in pre-modern times logic and mathematics were quite different from now, and mutually incompatible. Mathematics was, broadly speaking, a part of physics. The main advocates of this ‘pragmatic’ view were largely French, who resisted the ‘modern’ ideas, largely from Germans, as it happens.

    • March 18, 2021 at 6:17 pm

      Actually, my own view is that anyone who has a (to me) logical view of science will use mathematics sensibly. So the point of understanding the differences between ideas about mathematics is not to teach the ‘good’ scientist (or science-inspired economist) anything, but merely to point out that, while critiques like Lars’ are important, they do not show that logic and mathematics per se are not potentially much more useful in a ‘good’ way than many practical people seem to think.

      Actually, as far as I can see, insofar as critiques of mathematics are valid they pretty much reflect the views of ‘modern’ mathematicians, at least as I was taught them. But I don’t claim to follow all the (to me) obscure arguments.

      Incidentally, your own work seemed at first reading very sensible, but maybe I was right to worry that not everyone would read it the same way.

      • A.J. Sutter
        March 19, 2021 at 1:38 pm

        Would someone care to explain how the French vs. German contrast relates to economics? Also, I’m not sure I understand what is “classical” mathematics — does the name suggest there is also a non-classical maths?

        And how do these distinctions work in practice? E.g., I like topology: is that French, or is it German? Is it classical, or non-classical?

        And how does one classify economists who abuse mathematics? E.g., here is a passage from a well-known textbook, Hal Varian’s Intermediate Microeconomics: A Modern Approach, 8th ed. (New York: W. W. Norton 2009), discussing indifference curves for discrete goods, en route to proceeding to the usual diagrammatic exposition with smooth, continuous indifference curves:

        The choice of whether to emphasize the discrete nature of a good or not will depend on our application. If the consumer chooses only one or two units of the good during the time period of our analysis, recognizing the discrete nature of the choice may be important. But if the consumer is choosing 30 or 40 units of the good, then it will probably be convenient to think of this as a continuous good.

        When I was a college freshman, the statement, “The integers are nowhere dense in the reals” seemed rather oracular and hard to understand. But in this context, it pulls the rug out from under the standard, misleading differential calculus-based pedagogy promulgated by generations of textbooks. (Translation for the less mathematically inclined: Imagine integers on the line of real numbers. They are discrete points, with nothing connecting any two of them. Contrary to Varian, considering a set with a lot of integers doesn’t do anything to eliminate the fact that they are discrete. And contrary to his and many other textbooks, you can’t do differential calculus (which, as W.S. Jevons pointed out, involves infinitely small quantities) on functions whose domain (X-coordinate values) is discrete.) So is Varian’s math French mathematics? German? Logical? Illogical?
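        [Editorial aside: the discreteness point can be made concrete with a tiny numerical sketch. The log utility function below is hypothetical, not from Varian’s text; it only illustrates that on integer quantities the sole available notion of “marginal” is a finite difference, which never coincides with the derivative of a smooth stand-in.]

        ```python
        import math

        def utility(q):
            """Toy utility of q whole units of a good (illustrative only)."""
            return math.log(1 + q)

        def marginal_discrete(q):
            """Finite difference: the only 'marginal' notion on the integers."""
            return utility(q + 1) - utility(q)

        def marginal_continuous(q):
            """Derivative of the continuous approximation u(q) = ln(1 + q)."""
            return 1.0 / (1 + q)

        # The gap shrinks as q grows but never closes, and the domain stays
        # discrete: moving from 2 units to 30 units changes nothing about
        # the integers being nowhere dense in the reals.
        for q in (1, 30):
            print(q, marginal_discrete(q), marginal_continuous(q))
        ```
        
        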

      • March 19, 2021 at 3:30 pm

        Dave,

        you put it:

        so the point of understanding the difference between different ideas about mathematics is not to teach the ‘good’ scientist (or science-inspired economist) anything

        At the end of my post of March 18, 2021 at 6:29 am, I wrote that Lars Syll “must work for enhancing true mathematical economists instead of denouncing them.” Perhaps this sentence would be better paraphrased as “produce good scientists who work in economics.”

        Dave >

        maybe I was right to worry that not everyone would read it the same way.

        Thank you for your caution. It may not be easy for many people to understand what I am claiming. A typical example must be the “axiomatic method” I employed in Chapter 2 (which I wrote) of our book A Large Economic System with Minimally Rational Agents.

        I started my argument by presenting 17 postulates that I consider appropriate. This is not to claim that these postulates are self-evident or true without argument. Lars Syll’s expression “deductive-axiomatic method” has the connotation that axioms are true sentences without any examination of, or reference to, the real world. I used the term postulate and avoided the term axiom, because I thought this kind of misunderstanding attaches more strongly to “axiom” than to “postulate”. Before introducing the set of postulates, I made several remarks, in one of which I put it like this:

        In Classical Greece, axioms and postulates are supposed to be either self-evident or plausible without proofs. However, modern mathematics has much looser criteria than self-evidence. It simply asks whether a set of postulates produces a mathematical structure that is interesting to study. For sciences other than mathematics, a set of postulates is preferably consistent and independent. Unlike mathematics, which is an abstract logical entity, economics is an empirical science and has a “real-world” object of its study, i.e., economy. To use the axiomatic method in an empirical science, it is therefore necessary that postulates are also realistic, or true, within a certain range of validity. …

        The reality of postulates is the first thing we should care about. It may not require a long explanation. All of us know what reality is, although we may have different opinions in concrete cases. In this chapter we assume postulates are based on economic laws which are distilled from empirical observations through a long history of argumentation. We can therefore claim that they have strong links to reality. However, to ask for a postulate to be a universal truth is impractical (perhaps impossible), because an economy has such variety that a law covering all cases becomes too complicated, just like a section in some legislative law having very many clauses but which is also full of exceptions. Using a set of such complicated postulates, although very real, does not clarify the logic of how the economy works. It is that logic which is our goal, and so we have to sacrifice such completeness for the sake of tractability and comprehensiveness. Therefore, the right choices regarding the scope of a postulate’s validity are of primary importance.

        We cannot dispense with arguments on each postulate. Although such arguments, directly based upon empirics, are rather rare and difficult to carry out, they are an essential part of economics. This is an aspect of the discipline which has been relatively neglected, and so it is necessary to search for suitably relevant references widely, purposively, and attentively. Even so, detailed discussion of these matters would still require a whole book. We have to reject engaging with that project here, first, due to the limits of publishing space and, second, by the limits of our own capabilities.

        I am sorry for this long citation. The notion of a range of validity seems very important, although it is seldom discussed in economics. Making clear what the range of validity is, is one of the merits of employing the axiomatic method. Probably Lars Syll, even if he had read my chapter, did not understand this.

        There is no need to defend the axiomatic method when it is used in an extremely wrong way, as by mainstream economics, but I thought it would not be useless for heterodox economists to know the merits (and defects) of the axiomatic method.

      • March 19, 2021 at 5:21 pm

        A.J. Sutter,

        as Marcus du Sautoy’s book has not yet arrived at my place, I cannot speak to the difference between French and German views.

        As for the terms classical and modern in mathematics, the former roughly refers to mathematics before the end of the 19th century, whereas the latter refers to mathematics of the 20th and 21st centuries. A more precise demarcation point is the foundational crisis of mathematics at the turn from the 19th century to the 20th. On this point, please read the second quotation in my post above of March 16, 2021 at 2:58 pm.

        The crisis deeply changed the philosophy of mathematics, and we may say that the majority of 20th-century (pure) mathematics is a response to this crisis.

      • March 20, 2021 at 10:26 am

        Responding to AJS: I agree with Yoshinori’s second response, on the classical/modern distinction, but I need to ponder his first a bit more: maybe there is a substantive difference between us, or only a difference in how we express ourselves. But the second response is ‘good’.

        Yoshinori ends “The crisis deeply changed the philosophy of mathematics and we may say that majority of the 20th century (pure) mathematics are responses to this crisis.” How true! But I guess my big point is that much of what the general public thinks of as ‘mathematics’ is ‘old school’, and even when new areas like topology are developed they tend to be thought of in an ‘old school’ way.

        Your quote of Varian is very relevant. Physicists have struggled with this issue. My hope is that they get to a point where they can explain it to the rest of us soon. Marcus, a professor of the public understanding of science is ‘on the case’. Power to his elbow. Meanwhile, many career physicists would prefer to ‘shut up and calculate’, at least for now.

        Classically, mathematics was viewed as an abstraction of space and time, and hence necessarily was linked to ‘reality’. In the modern era mathematics is just about mathematics. It is the job of physics to try to work out the structure of space-time. They can use existing mathematical models as references if they want, but only as references, not as dogma.

        To me, the important distinction is not based on content, but on attitude. Scientists with ‘right’ attitude to science will, I think, not abuse the mathematics in the way that economists often seem to. Even Lars seems to fail to make an appropriate distinction, and so seems to me to misrepresent mathematics. Maybe his teachers had a ‘bad’ attitude?

        I am tempted to say a lot more, but Yoshinori says some things better.

      • March 20, 2021 at 10:36 am

        Yoshinori, re-reading your first response to AJS, I still quibble with your “To use the axiomatic method in an empirical science, it is therefore necessary that postulates are also realistic, or true, within a certain range of validity.”

        Taken out of context (as it was when I first read it) this seems very ‘classical’, and part of the problem I have with ‘bad science’. Reading on, you qualify your statement in such a way that any reasonable (to me) reader will understand. On the other hand, I fear that someone who lacks your insight could easily quote you out of context.

        In your second response to AJS you refer to philosophy. It would be very helpful if someone could re-write the above quote in a way that gets across your essential point without relying on your subsequent clarification. I think Lars is quite right to keep banging on about this as an issue, as are we to keep trying to ‘put him right’. (Good blog!)

  11. A.J. Sutter
    March 21, 2021 at 9:24 am


    @ Dave & Yoshinori: Thanks for your responses. Some replies:

    1. About topology: I introduced this example because of the French/German distinction, which I still don’t get. But since both Riemann and Poincaré laid its foundations, it seems something like a counterexample to the culture war perspective.

    Also, I’m not sure I understand what is meant by “even when new areas like topology are developed they tend to be thought of in an ‘old school’ way” [@Dave]. Does that mean they are thought of in some way as relating to space and time, instead of purely for themselves? One of my teachers in college was the late algebraic topologist Raoul Bott: does the fact that he got interested in topology from training as an electrical engineer make him “old school”? Does the fact that Chern numbers are useful in condensed matter quantum field theory (e.g., in connection with a class of materials known as topological insulators) make S.S. Chern “old school”?

    If the answer is negative in both cases, then what are the boundaries to “old school”? If the answer is affirmative in either case, then what is the purity test one must pass to be “new school”?

    2. Second, apropos of

    Classically, mathematics was viewed as an abstraction of space and time, and hence necessarily was linked to ‘reality’. In the modern era mathematics is just about mathematics.

    I wonder if this might not be an overly broad generalization. For one thing, it might be confusing mathematics with the mathematical profession. Classically, many mathematicians were involved in “practical” problems or were aware of the applications of some of their mathematical activities to physics and nature generally. But was this because they believed all mathematics had practical application? or was it simply because this was expected as part of their job description under the conditions of patronage, university employment, etc. at the time?

    For example, some of Fermat’s work seems to have been motivated by physics, e.g. his principle of least action. But what about his work in Diophantine equations? What about anybody’s work in Diophantine equations? Isn’t it plausible that that was motivated by a fascination with numbers and algebra for their own sake?

    Yes, it may only be since sometime in the 20th Century that mathematicians identified themselves professionally as “pure” or “applied” mathematicians. But this doesn’t mean they never did “pure” mathematics until that time.

    (Cf. the British distinction between solicitors and barristers: in many other jurisdictions, including the ones where I am a member of the bar, one can engage in both types of activities with the same license. That doesn’t mean that one can’t have a preference for, say, contract drafting & negotiation over litigating; nor does it mean that it’s prudent for any particular individual to engage in both types of activity; but neither does it mean that one believes they are both essentially the same activity.)

    3. Most of my reply is about the following:

    Your quote of Varian is very relevant. Physicists have struggled with this issue. My hope is that they get to a point where they can explain it to the rest of us soon. Marcus, a professor of the public understanding of science is ‘on the case’. Power to his elbow. Meanwhile, many career physicists would prefer to ‘shut up and calculate’, at least for now.

    This is a very good point: nature is quantized, just like most economic consumption. And yes, it’s true that many career physicists have an allergy to considering the philosophical foundations of their field. (Aside from philosophers, are there any other professions that are immune to that allergy?)

    But I’d argue this is a false equivalence. The way physicists approach ignoring quantization in many contexts is very distinguishable from how economists go about this.

    A. Physics Treatment:

    The general public made it through most of the 20th Century on the back of classical physics, even though that ignores quantization. The reason was that there were many phenomenological laws, like say Ohm’s and Newton’s laws, that allowed you to predict where a baseball or an artillery shell would land, allowed you to design electric light bulbs and national power grids, etc., all of them reasonably well. Classical physics might not have allowed you to explain the details of how everything worked — electrical conductivity and resistance, for example — but that didn’t prevent you from harnessing a less fundamental theory and getting accurate results in the real world. (Even for lasers, where quantum theory is needed to understand why they work, but not for doing lots of things with the light they emit.) Moreover, classical physics was adequate to let you predict when some stuff wouldn’t work: e.g., the theory of the ether, or if an electrical machine wouldn’t work because of inadequate power, etc. Ditto, by the way, for chemistry: even though chemical bonds and the differentiation of the elements are best explained at the quantum scale, there were many things we could reliably and repeatably do with molecules, metallurgy, etc. long before the existence of atoms was confirmed.

    We became aware of quantum phenomena through a series of experiments that were not so closely related to usual life. It turns out that at room temperature and pressure, quantum effects tend to be important only on very small length scales, roughly around 10^(-8) to 10^(-10) meters. (Some macroscopic quantum phenomena, such as Bose-Einstein condensation, need very low temperatures, and others, such as the production of light by the Sun and other stars, need very high temperatures and pressures. The discovery of “cathode rays,” one of the macrophenomena that motivated early quantum theory, required low pressures and high voltages.) Even the original two-slit experiment by Thomas Young in the 19th Century could be explained by classical physics: light behaved as a wave.

    And there are still plenty of unexplained everyday things — some of them quite difficult, such as explaining why drying liquids leave rings, or the behavior of waves in a string that’s got various pieces of cheap hardware suspended from it — where you don’t need quantum physics at all. (Nor, apropos of the SSC, a lot of money: I once studied with someone in the UCLA Physics Department who made the cover of Nature in the 1990s for an experiment whose most expensive piece of equipment was something he bought for about $350, using his MasterCard.)

    B. Economics Treatment:

    Now let’s consider how quantization is treated in economics. One of the creators of neoclassical economics, W.S. Jevons, began with the a priori notion that to be scientific, economics had to use differential calculus [1]:

    My theory of Economics, however, is purely mathematical in character. Nay, believing that the quantities with which we deal must be subject to continuous variation, I do not hesitate to use the appropriate branch of mathematical science, involving though it does the fearless consideration of infinitely small quantities. The theory consists in applying the differential calculus to the familiar notions of wealth, utility, value, demand, supply, capital, interest, labour, and all the other quantitative notions belonging to the daily operations of industry. As the complete theory of almost every other science involves the use of that calculus, so we cannot have a true theory of Economics without its aid. [1, emphasis added]

    To be fair to Jevons, he paid a lot of attention to dimensional analysis, and explicitly discusses the problem of discrete or “indivisible” goods in Chapter 4 of his Theory of Political Economy, in a separate section called “Failure of the Equations of Exchange.” It’s clear, though, that his emphasis in the book is on commodities ordered in vast bulk, which even though quantized — such as into grains of wheat, leaves of tea, etc. — can be treated as if continuous. This sets a political and social context for his work, which was shared in Alfred Marshall’s Principles of Economics [2], which introduced the now-ubiquitous demand curve diagrams. But unlike the forthright Jevons, when Marshall discusses an individual’s demand for tea purchased by the pound he disposes of the problem by assuming (i) it would be possible to watch how the individual’s utility changed with “infinitesimal variations in his consumption,” and (ii) that this would follow a continuous, downward-sloping demand curve (id., 125-126, 126n). Marshall also believed these problems would disappear by using a market demand curve, made from adding up individuals’ curves — and it was the market that interested him more. (Id., 128.)
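
    Marshall’s hope that aggregation smooths away individual discreteness can at least be made concrete with a toy sketch (my own illustration, with made-up reservation prices; nothing here is from Marshall): each buyer demands whole pounds of tea below some reservation price, so every individual demand curve is a step function, yet the sum over many buyers with scattered reservation prices looks almost smooth.

    ```python
    import random

    random.seed(0)  # reproducible made-up data

    def individual_demand(price, reservation):
        # one buyer: a single whole pound if the price is at or below
        # their reservation price, else nothing -- a step function
        return 1 if price <= reservation else 0

    # 10,000 hypothetical buyers with reservation prices scattered
    # uniformly between 1 and 5 (arbitrary units)
    reservations = [random.uniform(1.0, 5.0) for _ in range(10_000)]

    def market_demand(price):
        # Marshall's aggregation: just add up the individual steps
        return sum(individual_demand(price, r) for r in reservations)

    # the aggregate falls almost linearly from ~10,000 at price 1
    # toward 0 at price 5, though no individual curve is smooth
    for p in (1.0, 2.0, 3.0, 4.0, 5.0):
        print(p, market_demand(p))
    ```

    The smoothing here is purely arithmetical, of course; it does nothing to address utility’s measurability or interpersonal comparison, which is exactly what is in dispute.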

    Indifference curves were introduced by the next generation of neoclassicals, among them Pareto and Slutsky. Ultimately this led not only to Varian’s statement quoted above (though contrast his “30 or 40” with Marshall’s infinitesimals) but to the diagram and exposition in the world’s leading introductory economics textbook showing various individuals’ demand for continuous quantities of ice cream cones [3:42-43], and an individual’s indifference for various bundles of Pepsis and pizzas varying continuously [id., 263]. Most students who take a course using that textbook will go on in majors or careers other than economics, so they’ll never learn different points of view.
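
    The “infinitesimal variations” issue can also be made concrete with a small sketch (my own toy example; the square-root utility function is made up, not anything Jevons or Marshall proposed): the calculus treatment of marginal utility agrees closely with the actual one-more-whole-unit difference when quantities are in the vast bulk Jevons had in mind, but is badly off for a single Pepsi or pizza.

    ```python
    import math

    def U(q):
        # made-up concave utility function, purely for illustration
        return math.sqrt(q)

    def marginal_calculus(q):
        # the neoclassical treatment: dU/dq = 1 / (2 * sqrt(q))
        return 1.0 / (2.0 * math.sqrt(q))

    def marginal_discrete(q):
        # what a buyer of whole units actually faces: U(q+1) - U(q)
        return U(q + 1) - U(q)

    for q in (1, 10, 10_000):  # one pizza vs. grain by the sack
        c, d = marginal_calculus(q), marginal_discrete(q)
        print(q, round(c, 6), round(d, 6), f"gap {abs(c - d) / c:.1%}")
    ```

    On Jevons’s own terms the approximation is defensible for wheat and tea in bulk; the textbook pizzas and Pepsis sit at the other end of the scale, where the gap is substantial.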

    C. Comparison

    Classical physics ignored quantization because it was unaware of it in nature. Nonetheless, it was successful enough in many cases; it was an adequate framework for falsifying (in K. Popper’s sense) certain theories (such as the ether); and it was itself capable of being falsified in cases where it clearly didn’t work (e.g. at atomic scales).

    Neoclassical economics ignores quantization because of an a priori idea that a certain type of math should be applicable to it in order to gain it the prestige of being a science. Its founders’ scruples about using such math in cases where it was empirically inapplicable — one cannot buy infinitesimal quantities of Pepsi or pizza, much less of guns or iPads — were brushed aside or effaced by generation after generation of their followers. So it willfully ignores the pertinent realities of everyday experience: quite the opposite of classical physics.

    Moreover, unlike theories in classical physics, most published theories in economics are neither true nor false (sc., when their performative elements are removed: we neglect cases where everyone, say, deliberately prices options according to some a priori theory: see, e.g., [4], [5]). They’re all toy models. They don’t claim to represent the real world of economics but only to “isolate” certain features of it, although their propounders may attempt to draw conclusions about the real world when it serves their interests to do so.

    In the case of economists like Marshall, who laid the groundwork for dubious aggregation processes in economics, the continuous curves he was drawing represented a quantity — utility — that was not only impossible to measure directly, and impossible to measure in the manner required by his chosen mathematical framework in the individual case (utility of infinitesimal quantities of a commodity), but that also posed numerous other difficulties, such as interpersonal comparison, which had to be overcome in order to achieve the aggregates he used to justify his use of differential calculus. Plus, as Sonnenschein, Mantel and Debreu discovered, even as a purely mathematical exercise those aggregation methods don’t yield a unique demand curve if equilibrium is required. This led economists to adopt the fiction that an economy consists either of only one consumer or of many identical consumers (the “representative agent”). Cf. commentary by Perry et al. [6]

    So, attempting to equate the failings of classical physics with those of neoclassical economics is not at all persuasive.

    REFERENCES

    [1] Jevons, W. [William] Stanley (1888)[1878] The Theory of Political Economy. 3rd ed. Edited by Harriet Jevons. London: Macmillan: p. 3. The text is from the 2nd ed. 1879, pagination from a posthumous 3rd ed. 1888.

    [2] Marshall, Alfred (1920) Principles of Economics: An Introductory Volume. 8th ed. London: Macmillan.

    [3] Mankiw, N. Gregory. (2012) Principles of Economics. 6th ed. Mason, OH: South-Western Cengage Learning.

    [4] MacKenzie, Donald (2008) An Engine, Not a Camera: How Financial Models Shape Markets Cambridge, MA: The MIT Press.

    [5] MacKenzie, Donald et al., eds. (2008) Do Economists Make Markets?: On the Performativity of Economics Princeton: Princeton University Press.

    [6] “You lie like a priceless Persian rug / On a rich man’s floor / Well, you lie like a coon dog basking in the sunshine / On my porch /The way you lie like a penny in the parking lot / At the grocery store / It just comes way too natural to you” http://www.youtube[dot]com[slash]watch?v=pCwLsXZnFl4


    • March 21, 2021 at 4:18 pm

      A.J.S. (Andrew?): My notion of communication is that starting from scratch is impractical, and so one seeks some common understanding to build on. But all too often we get side-tracked from the original topic as it turns out our understanding – in this case of physics – isn’t so common after all. In this case my main point is that ‘here be dragons’. My hope is that someone like Marcus would help establish some common understanding on which we could build a meaningful discussion of the issues. Meanwhile, we do the best we can.

      1. On topology, I should have made it clear that I was not talking about topology as a branch of mathematics, but as something that gets deployed in arguments about practical action in much the same way as economists refer to calculus and probability: without ‘proper caution’.

      More broadly, the facts about Bott and Chern do not make them ‘old school’, any more than my own motivation in connection with various miscarriages of justice (or more recently, pandemics) makes me old school. What matters is actual attitude, which I defer to Marcus to try to explain. You would probably have to have known them to know whether they were ‘old school’ or not: reading their published papers tends not to be very helpful on this point.

      More generally, the ‘schools’ have got mixed up in the last 100 years, which is why Marcus’ historical account seems so helpful.

      2. You say ‘nature is quantized’. This seems to me a very ‘old school’ remark (sorry). I would rather say either (i) that the mainstream view among physicists has been that it is reasonable to treat nature as quantized or (ii) that nature is not continuous. Both seem to me much less contentious. You might say that ‘nature is quantized beyond reasonable doubt’, but then I would say that physicists have often ended up contradicting ideas that they had previously regarded as similarly beyond reasonable doubt.

      On the differences between physicists and economists: yes, clearly huge. I was simply trying to say that just having a degree in physics doesn’t make you immune from the kind of pressures which can lead to going along with some over-simplification of the issues. Going back to topology, it can be successfully applied to networks. But can we rely on its findings for all the real things that we call ‘networks’?

      Your (C) seems reasonable, but I am not a historian. My main point is that (as Lars says) we shouldn’t just carry ‘best practice’ across from one field to another and assume it’s going to be at all reasonable. But also, if the ‘best practice’ doesn’t work out, we might still consider modifying it, thoughtfully. (E.g. in using ‘mathematical models’.)

  12. March 21, 2021 at 3:28 pm

    Dear A.J. Sutter

    Your comparison is very instructive. We should learn from physics more.

    I believe I have written a similar remark somewhere but I could not find it. So, I write it again.

    I have the impression that both mainstream and heterodox economists share a common misunderstanding: that economics has successfully learned from physics. The only difference is that one side assumes economics is therefore scientific, while the other considers this the origin of where economics went wrong. Critical economists often discuss this under the term “physics envy.” In my opinion, that criticism misses the mark. Economics has not learned deeply from physics. We have much more to learn from the history of modern physics (I mean from Galileo and Kepler onward). Sutter taught us the same lesson.

    P.S. Raoul Bott was one of my heroes when I was a (half-baked) mathematician. I was working on Atiyah-Singer theory. I knew the Bott periodicity theorem. I read Milnor’s Morse Theory and was deeply impressed by the depth of Bott’s theorem. But at that time I did not know that he was first trained as an electrical engineer. When I read “Raoul Bott” in Wikipedia, I was astonished to learn that he wrote his PhD thesis under the direction of Richard Duffin, a name that came to me much later when I was interested in chaos theory.

    • March 21, 2021 at 4:23 pm

      I caution that there may be a significant difference between learning from some simplified story about old-school practices and a ‘deeper’ learning from the ‘real deal’. One should also not assume that an assumption that is ‘pragmatic’ in one field will be realistic in another. But, yes, great potential.

    • March 22, 2021 at 7:58 am

      Dave,

      yes, we should be cautious in various ways when we talk about “learning from physics”. When economists (mainly heterodox economists) talk about “physics envy”, they seem to mean an imitation of forms. Mirowski, in his book More Heat than Light (1989), points for example to the case of Stanley Jevons. Jevons was trained in chemistry and had the idea that the method of physics could also be useful for economics. Mirowski thinks that the utility function was conceived as an entity similar to energy and was used as such by Jevons. This is a case of an imitation of forms.

      When I say we must learn from (the history of) modern physics, I mean we should learn from the sinuous history of modern physics before and after it emerged as a modern science. It took many years (even centuries) to move from Aristotelian ideas to the concepts of impetus and inertia. Galileo’s experiments with falling bodies on an inclined plane were an effort to examine whether the concepts of inertia and force work well in reality. However, as soon as Newtonian mechanics had been established, nobody continued to question its premises. On the Continent, rational mechanics (mécanique rationnelle) emerged. This must be a keyword when we investigate the origin of the rationalist thought among some neoclassical economists like Gerard Debreu.

      Roberto Marchionatti and Fiorenzo Mornati (2016), “Economic theories in competition,” remark that

      Walras maintained that pure economics is “a physical-mathematical science like mechanics” that uses the “rational method” and not the “experimental method” (Léon Walras 1900, p.71). Therefore theory is not confirmed by experience but by the structure of theorems and proofs.

      This is understandable when we know “rational mechanics.” In English Wikipedia, there is only a brief comment on this term, but in French Wikipedia, we have a full article on it. The latter defines “rational mechanics” in this way:

      Rational mechanics is a mathematical discipline that aims to erect the mechanical theories derived from Newton’s mechanics … into a corpus governed by definitions and axioms, so that they become hypothetico-deductive sciences, susceptible of a priori judgments.

      We can, thus, detect a lineage: “mécanique rationnelle” > Walras > Debreu.

      N.B. This contradicts Weintraub, Mirowski and Moczard, who claim that Debreu’s attitude toward mathematical economics derives from Bourbaki. We know that his ideas draw on a tradition much older than Weintraub and the others imagine.

      Why did this kind of philosophy of science emerge? The French Wikipedia article explains the circumstances of the emergence and endurance of rational mechanics, a term that seems to have been coined by Auguste Comte:

      Auguste Comte’s positivism weighed heavily in France on the content of scientific education; mechanics, for example, was the province of the mathematics professors in the faculties of science, not of the physicists. For a very long time rational mechanics took the place of mechanics in the teaching of the preparatory classes. It offered several advantages: it was inexpensive, requiring only paper and pencil instead of costly laboratory experiments and demonstrations; it reinforced the mathematics course under the guise of physics; and it tacitly conveyed the ideas (1) that there are definitive truths in science; (2) that the finished form of scientific knowledge is mathematical; and (3) that all natural phenomena are reducible to a very general principle of conservation (or invariance).

      It is quite ironic that the philosophy of positivism helped give birth to this rationalist philosophy, which then influenced economics in a direction exactly opposite to positivism’s own.

      • March 22, 2021 at 2:00 pm

        Thanks. I have never understood why English speakers tend to read the French in odd ways, or why even many English speakers have such confused views about ‘rationality’. It does seem unfortunate that such issues are of relevance to economics!

  13. A.J. Sutter
    March 22, 2021 at 10:18 am

    @ Dave & Yoshinori

    Once again thanks for your comments, and also for looking past my bad formatting to get to the substance of my comments.

    0. Andrew or Andy is fine.

    1. I am thoroughly confused about the “schools” now. I’ve searched on the term in a Kindle copy of The Music of the Primes, and pretty much all but two or three of the 36 instances of the term are quite literal: a place with desks and classrooms, etc. The collocation “old school” doesn’t occur at all in it, and “new school” occurs only once, with the literal meaning of ‘school.’ So the references to the book or its author, and the usage of those terms, aren’t really getting your point across. I invite you to be more specific, if you care to be.

    As for attitude, I can’t be confident of having any idea about the attitude of Prof Bott towards his subject, but I can say that in my encounters with him he was charming, warm and funny. I was hardly a star pupil: when I got back my examination blue book for his course, most of my proofs were adorned with one scrawled word: “Salad!” Perhaps this makes him “old school”: I don’t think they make too many profs like him anymore.

    2. A. First, to clarify a general point, I’m not saying that economics should be more like physics across the board. It should be more like physics in (i) formulating its theories based on observation of reality, instead of a priori adopting particular mathematical techniques and metaphors (such as equilibrium) from physics and viewing economic phenomena through that prism, (ii) aiming for theories that can be falsified empirically, (iii) discarding falsified theories, (iv) avoiding fallacies of composition, etc.

    But in fact I don’t believe that economics can or should ever be like physics — and that can be OK. E.g., I believe physical theory shouldn’t consider ethics or political power (though the practices of physicists in society often should), but that economics should consider them. So I believe neoclassical economic theory has staked out undesirable political and ethical positions by pretending to exclude those topics, as John Kenneth Galbraith wrote about often.

    My personal belief, which might not be shared universally on this blog, is that politics is at least as fundamental to the social sciences as is economics. So making economics more like physics isn’t at all what I’m aiming at, even though physics has some traits that economics might emulate as a discipline. (I realize I haven’t made this point in prior comments.)

    An analogy: I would like to be more like Suzuki Ichiro, former outfielder for the Seattle Mariners. Namely, I would like to have his strong discipline, so that I could become more excellent in my field in the way he was in his. However, I would not like to be more like him in his hair styling, his diet of chicken wings before major professional engagements, nor at all in pursuing baseball or any other sport as my career.

    Note especially to Yoshinori: This doesn’t exclude that mathematics might be useful in describing certain economic phenomena, such as the behavior of financial markets. In the same way, certain mathematical tools might be useful for describing certain phenomena in biology or ecology. At least in biology, the use of such techniques can co-exist with other approaches: their use doesn’t necessitate the reduction of all aspects of biology to a mathematical theory.

    A corollary of this is that I agree that wholesale importation of “best practices” from one discipline to another often won’t be a good idea. But if discipline X has some robust techniques for limiting the propagation of bull**** while discipline Y is particularly deficient in such techniques, some importation or at least adaptation of those techniques into Y might be worth looking into.

    B. On topology, I’m no expert (cf. salad-related comment above), but clearly topology can’t tell you everything interesting there is to know about a network. E.g., the resistors in a network may in some contexts be analogous to edges in a graph or to surface elements, and the negative spaces bounded by them may sometimes be analogous to holes in a surface, but much of the time what’s flowing through the resistors will also be of interest, and topology doesn’t say much about that. See, e.g., a 1965 paper by R.J. Duffin (82176898.pdf), where graph topology becomes useful for understanding an electrical network only when combined with Ohm’s law and Kirchhoff’s laws. This is all the more true of communications networks, where we may be concerned with the content of messages (e.g. “fake news” being spread through social media), as well as with the topology of the flow.

    C. Finally, apropos of

    You say ‘nature is quantized’. This seems to me a very ‘old school’ remark (sorry). I would rather say either (i) that the mainstream view among physicists has been that it is reasonable to treat nature as quantized or (ii) that nature is not continuous.

    I’m not sure what the claim is here. I’m fine with qualifying my remark about nature being quantized by saying that’s how it’s regarded in the idealization of contemporary physical theory; I’m agnostic as to whether it is true or not in the “Lebenswelt,” the domain of “Dinge an sich,” etc. If that’s what you’re getting at, we’re in agreement.

    Your reference to “mainstream” throws me off a bit: do you mean that there are physical theories that operate at scales comparable to the Planck length but deny that the Planck length (or any other length) operates as some minimum granularity scale for the physical universe? And by “discontinuous” do you mean in the sense of, for example, the real line minus {0} (or even the real line minus the integers), but not discrete? If so, I’m not familiar with such theories, so some references would be appreciated.

    PS: @Yoshinori: Thank you for the very interesting information about mécanique rationnelle! That’s something I must look into more. But is it possible that this also made the French intellectual climate more receptive to/generative of Bourbaki? It’s a mystery why Bourbaki ignored Gödel’s result on undecidable propositions — or maybe not so mysterious, since it sort of undermined their project. Plus, they didn’t adopt category theory even though Eilenberg had been a member of Bourbaki; MacLane suggested that Bourbaki were wedded to the notion of “échelle de structure” that they had published in an early volume in 1939, and just didn’t want to revise that foundational volume. Bourbaki’s rigidity, in other words, certainly seems to have a lot in common with Debreu’s approach: perhaps due to a common influence?

    • March 22, 2021 at 2:10 pm

      Andy, Please disregard references to schools and physics where they confuse. Such cultural references can misfire.

      But as you ask … my understanding is that some (respectable?) physicists are, in effect, questioning the conventional wisdom that “the Planck length … operates as some minimum granularity scale for the physical universe”. But even if they weren’t, it is not ‘truly’ scientific to be overly dogmatic on these things.

  14. Gerald Holtham
    March 22, 2021 at 12:32 pm

    The use of differential or integral calculus can be justified in macroeconomics. When GDP is measured in trillions of units, expenditure of one dollar or even a hundred dollars can be treated as effectively infinitesimal. At micro scales calculus becomes less useful, which is why a Soviet economist concerned with the operation of individual factories invented linear programming to replace optimization of continuous functions. This illustrates a paradox. Many economists regard microeconomics as “scientific” while they think macroeconomics cannot be made so because, in spite of DSGE etc, they know its regularities cannot be derived from axioms of individual behaviour. From my perspective the reverse is nearer to being true. The axioms of individual behaviour used in economics are simply assumptions and they are largely unsupported by laboratory testing or the insights of psychologists and social psychologists. Whereas macroeconomics relies on the law of large numbers. Individuals may be unpredictable but behavioural tendencies in a society may be revealed by observing the behaviour of aggregates. Engineers know that road traffic flows are generally stochastically predictable without knowing what any single driver will do and economists are in the same position. When incomes increase people tend to spend more. If a class of goods becomes more expensive, people tend to buy relatively less of it. Hard headed business firms find the concept of the price or income elasticity of demand useful. The individual demand curve may be illegitimate but it is a useful notion or serviceable approximation when dealing with large aggregates. The hunt for “microfoundations” has been wholly misdirected. Economics will progress when we understand more about aggregation and interaction effects in disorderly systems and how a kind of stability (not “equilibrium”) can be generated and disturbed. 
Even then economics will not escape the fact that results are conditional on a changing social and political environment.
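The law-of-large-numbers point above can be sketched in a toy simulation (the numbers below are illustrative assumptions, not estimates of anything): each individual’s response is very noisy, yet the population average is tightly pinned down.

```python
import random

random.seed(0)

def spending_response():
    """One consumer's extra spending per extra dollar of income:
    individually very noisy (a hypothetical toy distribution)."""
    return random.gauss(0.6, 2.0)  # assumed mean propensity 0.6, large noise

one_person = spending_response()                   # almost uninformative
population = [spending_response() for _ in range(1_000_000)]
aggregate = sum(population) / len(population)      # law of large numbers

print(f"one individual:     {one_person:+.2f}")
print(f"population average: {aggregate:+.3f}")     # hovers near 0.6
```

The aggregate is predictable in this toy world precisely because the idiosyncratic noise washes out; the caveat above stands, though: nothing here protects the estimate when the underlying distribution itself shifts with the social and political environment.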

    • March 22, 2021 at 2:19 pm

      Gerald, you say ” macroeconomics relies on the law of large numbers” and “economics will not escape the fact that results are conditional on a changing social and political environment”.

      This seems to me very dismal, and makes economics irrelevant to my own concerns about economies. Maybe you should refer to ‘conventional macroeconomics’ and ‘conventional economics’ instead, or propose some other terms for the study of real-world economies, considered broadly? Then maybe a new “Economics will progress when we understand more about aggregation and interaction effects in disorderly systems and how a kind of stability (not ‘equilibrium’) can be generated and disturbed”? ;-)

  15. March 22, 2021 at 1:31 pm

    Dear Dave Marsay,

    Marcus du Sautoy’s book The Music of the Primes has arrived. Would you tell me the chapter number and the rough place within the chapter? The page numbers will differ in my Japanese translation.

    • March 22, 2021 at 2:29 pm

      I found chapter 5 ‘the mathematical relay race …’ and particularly the section on ‘Hilbert, the mathematical pied piper’ insightful on the French attitude (which I summarise as ‘old school’). Your quotes above seem somewhat in the same vein.

      If I can quote Gerald: “Engineers know that road traffic flows are generally stochastically predictable without knowing what any single driver will do and economists are in the same position.” Some engineers sometimes talk as if traffic flows are necessarily stochastic, and hence that the corresponding probabilistic and statistical methods are always appropriate. It seems to me that economies are also ‘generally’ ‘stochastically predictable’ but the times when they aren’t (financial crises, pandemics, shifts in power balances, …) are also in need of some appropriate theory.

  16. March 22, 2021 at 2:27 pm

    Dear Andy and Dave,

    Note especially to Yoshinori: This doesn’t exclude that mathematics might be useful in describing certain economic phenomena, such as the behavior of financial markets. (A.J. Sutter on March 22, 2021 at 10:18 am, in 2.A, just before 2.B)

    Yes, of course. That is why I regard mathematical economics as my profession. But methodological thinking like that of rational mechanics is so deeply entrenched in economics that we have to fight against it. The re-discovery of rational mechanics was a great by-product for me. I must thank Andy and Dave for it.

    As for the behavior of financial markets, I think there is no better approach for the moment than phenomenological studies like An Introduction to Econophysics by Mantegna and Stanley (1999) or Chapter 4, “Financial Markets: Bubbles, Herds and Crashes,” in Alan Kirman’s Complex Economics: Individual and Collective Rationality (2010). Do you know any good study or theory?

    • March 22, 2021 at 2:33 pm

      No. But a copy of Kay and King’s ‘Radical Uncertainty’ has just arrived, and I have some hopes of it. Do you have anything from Kirman that might provide common ground for study from a variety of perspectives, including my ‘pedantic mathematical’ one?

  17. March 22, 2021 at 3:24 pm

    Andy,

    PS: @Yoshinori: Thank you for the very interesting information about mécanique rationnelle! That’s something I must look into more. But is it possible that this also made the French intellectual climate more receptive to/generative of Bourbaki?

    Yes, it is. But our pictures of Bourbaki may differ a bit. Are you saying this after having read my whole comment on Móczár’s Chapter 8? If not, please read it there, together with my comment of March 15 (the fifth comment from the top). If you still feel uneasy about some points in my account, let us start our discussion.

    If you cannot get access to ResearchGate, I can send you PDFs or invite you to ResearchGate.

    • A.J. Sutter
      March 22, 2021 at 4:02 pm

      Yoshinori, I’m on ResearchGate, I’ve read the chapter and your comments, and I don’t have any uneasiness. I don’t have any special understanding of the impact of Bourbaki on Debreu, so I am willing to believe you may have gotten some points right where Móczár may have gotten some wrong — I’m agnostic. Certainly there are similarities in style between Bourbachique writings and Theory of Value, although the latter has a few too many pictures to truly exemplify the genre.

      I understand you used to be a fan, but I immediately detested Bourbaki from the first time I tried to read Dieudonné’s real analysis text in college, and I gravitated toward Vladimir Arnol’d’s style of writing as soon as I discovered it a few years later. (He was a bitter foe of Bourbaki.) To that extent we may have a different picture of Bourbaki, but not necessarily of its influence (or not) on Debreu.

      • March 26, 2021 at 4:28 am

        Andy,

        > I’m on ResearchGate,

        That’s good. I have been reading these days to learn how Walras thought. Please read my comment that was posted here.

        I willingly admit that Bourbaki and Debreu show a deep similarity of style. On this point there must have been an influence from Bourbaki on Debreu. But it concerns the form of exposition. Even on this aspect, Debreu was not totally won over by Bourbaki, as he recollected in his interview with Weintraub:

        After a year or so (I entered in the fall of 41), I began to wonder whether mathematics, at that time, was becoming too very abstract under the influence of Bourbaki — though not so very dominant as it later became (though maybe I anticipated that development). I had to decide whether I wanted to spend my entire life doing research in a very abstract subject. (Debreu 1992 Interview with Roy Weintraub. Cited from pp.259-260 in Weintraub & Mirowski 1994)

        What I want to know is on what basis Debreu came to claim that economics can be detached from the economy we observe. Read again this passage of Debreu’s (I have cited it somewhere, but I cannot find where):

        In these directions, economic theory could not follow the role model offered by physical theory. Next to the most sumptuous scientific tool of physics, the Superconducting Super Collider whose construction cost is estimated to be on the order of $10^10 (David P. Hamilton, 1990; see also Science, 5 October 1990), the experiments of economics look excessively frugal. Being denied a sufficiently secure experimental base, economic theory has to adhere to the rules of logical discourse and must renounce the facility of internal inconsistency. (Debreu 1991, p.2)

        This is, I believe, contradictory to Bourbaki’s mathematical philosophy, because Bourbaki, I believe, never imagined that mathematics could directly tell us anything about reality outside the mathematical world. Mathematics can be a useful tool of analysis and a framework for the natural and social sciences, but it can never give direct knowledge of the objects of those sciences. On this point, formalism does not matter. The point is that the Bourbaki group came after the discovery of non-Euclidean geometry: two mutually contradictory axiomatic systems had been found. The thrust of modern mathematics is that it has abandoned any link with the “real world.” (Of course, this strategy had various demerits, as Morris Kline severely charged.)

        The fact that Debreu’s axiomatic system is logically consistent tells us nothing about the existing economy. If we can find another axiomatic system (as we have done in our book) that gives an essentially different picture of the economy, then it is no longer logical consistency that determines the “truth” of the two systems.

        So I am curious to know where this peculiar idea of Debreu’s came from. One possible answer is that he was influenced by Walras, who (to my astonishment) held a Platonic metaphysics. This possibility does not seem to have been explored sufficiently. Weintraub and Móczár see only the apparent similarity, not the philosophy of economics as a science. (I am sorry that I have often spelled Moczard instead of Móczár.)

        Other possibilities must be explored. For example, it is possible that Debreu was directly influenced by the rational mechanics of nineteenth-century France. Another is that he was influenced by Maurice Allais, whose book drew Debreu into mathematical economics.

  18. March 24, 2021 at 11:55 am

    I am reading Kay and King’s ‘Radical Uncertainty’, and now wish to comment afresh on Lars’ piece. (Not that I resile from what I said before, but the following may be more effective.)

    As a mathematician I often get puzzled by how people view mathematics, and in particular by what Romer calls ‘mathiness’. In practice, in many fields, people often refer to game theory without any apparent understanding of it. For example, the seminal text says:

    “The current assertions concerning free competition appear to be very valuable surmises and inspiring anticipations of results. But they are not results and it is scientifically unsound to treat them as such as long as the conditions which we mentioned above are not satisfied” [2.4.2 in 3rd Ed.]

    Yet https://en.wikipedia.org/w/index.php?title=Game_theory&oldid=1011854501 seems to support Lars’ view, that mathematical game theory is the work of the devil, or at least delusional. To channel Kay and King: what is going on here?

    (According to 4.6.3 “[G]iven the same physical background different “established orders of society” or “accepted standards of behavior” can be built, all possessing those characteristics of inner stability which we have discussed. Since this concept of stability is … operative only under the hypothesis of general acceptance of the standard in question these different standards may perfectly well be in contradiction with each other.” I find this hard to reconcile with ‘mathiness’.)

    • A.J. Sutter
      March 24, 2021 at 2:07 pm

      Sorry, your point in this comment is a little unclear, esp. to those of us who have not read Kay & King. Please allow me some questions, because I don’t have any idea of what you’re getting at.

      1. Are you saying that the Wikipedia article “supports” Lars’s view because (a) you believe the article itself agrees with his view of game theory, or because (b) you believe it is evidence that GT is delusional/diabolical?

      2. OK, I did figure out that the “seminal text” is von Neumann & Morgenstern. The passage you quote (4.6.3) is embedded in a very complex context. In particular, “established orders of society” and “accepted standards of behavior” have very particular significances in vN&M that don’t necessarily conform to their meanings in ordinary usage. More specifically, they refer to sets of imputations (distributions) of the proceeds of a game. For readers who want more background as to what vN&M meant, some pertinent excerpts are at the end of my comment.

      Beyond that, however, it seems clear to me from re-reading this many years after I first read it that vN&M’s ambitions for their theory were at least a tad delusional control-freaky (see excerpts below). They themselves are pretty good (albeit perhaps hostile, in the legal sense) witnesses for Lars’s side of the argument.

      3. Finally, I couldn’t make out what your stance is on Romer’s article (“Mathiness in the Theory of Economic Growth,” AER 105(5): 89–93 (2015)). Is it Romer’s article that puzzles you, or the attitudes of the third parties whom he accuses of mathiness? Romer’s article is interesting. Almost immediately we encounter:

      Economists usually stick to science. Robert Solow (1956) was engaged in science when he developed his mathematical theory of growth. But they can get drawn into academic politics. Joan Robinson (1956) was engaged in academic politics when she waged her campaign against capital and the aggregate production function.

      First sentence: wow, as readers of this blog post and comment thread will appreciate. Then: what is the point of this homage to Solow and trashing of Robinson so many decades after the Cambridge controversies if not academic politics? Later, there’s this:

      Let q stand for individual consumption of mobile phone services. For a ∈ [0,1], let p = D(q) = q^{−a} be the inverse individual demand curve with all-other-goods as numeraire. Let N denote the number of people in the market. Once the design for a mobile phone exists, let the inverse supply curve for an aggregate quantity Q = qN take the form p = S(Q) = Q^b for b ∈ [0,∞]

      Apropos of mathiness, what are the dimensions of q and p? Also, and maybe this is simply my ignorance, how can you have “all-other-goods” as a numeraire? Isn’t the point of a numeraire to choose one good and express all other goods in terms of that one?

      Anyway, as for what is puzzling, definitely Romer’s article is a member of that set, IMHO.

      ##Background excerpts from von Neumann & Morgenstern 3rd (1953)

      4.1.1. We have now reached the point where it becomes possible to give a positive description of our proposed procedure. This means primarily an outline and an account of the main technical concepts and devices. [¶] As we stated before, we wish to find the mathematically complete principles which define rational behavior for the participants in a social economy, and to derive from them the general characteristics of that behavior. And while the principles ought to be perfectly general i.e., valid in all situations we may be satisfied if we can find solutions, for the moment, only in some characteristic special cases. [¶] First of all we must obtain a clear notion of what can be accepted as a solution of this problem; i.e., what the amount of information is which a solution must convey, and what we should expect regarding its formal structure.

      4.1.2. The immediate concept of a solution is plausibly a set of rules for each participant which tell him how to behave in every situation which may conceivably arise.

      4.1.4. We described in 4.1.2. what we expect a solution i.e. a characterization of rational behavior to consist of. This amounted to a complete set of rules of behavior in all conceivable situations. This holds equivalently for a social economy and for games. The entire result in the above sense is thus a combinatorial enumeration of enormous complexity. But we have accepted a simplified concept of utility according to which all the individual strives for is fully described by one numerical datum (cf. 2.1.1. and 3.3.).

      4.2.1. We have considered so far only what the solution ought to be for one participant. Let us now visualize all participants simultaneously. I.e., let us consider a social economy, or equivalently a game of a fixed number of (say n) participants. The complete information which a solution should convey is, as we discussed it, of a combinatorial nature. It was indicated furthermore how a single quantitative statement contains the decisive part of this information, by stating how much each participant obtains by behaving rationally. Consider these amounts which the several participants “obtain.” If the solution did nothing more in the quantitative sense than specify these amounts, then it would coincide with the well known concept of imputation: it would just state how the total proceeds are to be distributed among the participants. [footnotes omitted]

      4.4.2. The notion of domination … is clearly in the nature of an ordering, similar to the question of preference, or of size in any quantitative theory. The notion of a single imputation solution corresponds to that of the first element with respect to that ordering.

      4.5.1. [O]ur task is to replace the notion of the optimum i.e. of the first element by something which can take over its functions in a static equilibrium.

      4.5.3 … A set S of elements (imputations) is a solution when it possesses these two properties: [I omit these as not relevant to the present discussion]

      4.6.1 … [I]t appears that the sets of imputations S which we are considering correspond to the “standard of behavior” connected with a social organization. Let us examine this assertion more closely. [¶] Let the physical basis of a social economy be given, or, to take a broader view of the matter, of a society. According to all tradition and experience human beings have a characteristic way of adjusting themselves to such a background. This consists of not setting up one rigid system of apportionment, i.e. of imputation, but rather a variety of alternatives, which will probably all express some general principles but nevertheless differ among themselves in many particular respects. This system of imputations describes the “established order of society” or “accepted standard of behavior.” [footnotes omitted]

      • A.J. Sutter
        March 24, 2021 at 2:23 pm

        PS to my prior post: doesn’t Romer’s razzle-dazzle in the math excerpt simply boil down to b = –a – logN if we’re going to be dimensionally homogeneous?

        In which case, how can we have b ∈ [0,∞], since both a and logN are > 0?

        Am I missing something?

      • March 24, 2021 at 5:04 pm

        Andrew, thanks for your response. I shall try to be clear. (Not something mathematicians are renowned for, I’m afraid, hence my earlier reference to Marcus.)

        0. If you haven’t read Kay and King, it’s maybe a distraction at this stage. (But I do think we need some such text that might support more constructive debate.)

        1.(a) I think that Lars could easily read wikipedia and not be disabused of his (to me) outlandish notions. (b) It is evidence that GT is widely regarded as supporting views that I disagree with. (If it is ‘evidence’ that GT is actually delusional, I totally discount it I’m afraid: I’ve read it for myself.)

        2. Your first para. If, as you say, the paragraphs I quote only apply to a particular theory of economics in some obscure (to me) technical sense, then am I not free to simply regard the whole thing as a ‘mathematization’ of Morgenstern’s views, and not necessarily applicable to contemporary circumstances?

        On your second para: Having read Kay and King it suddenly occurred to me that the GT could ‘reasonably’ be read in the way that you have. Thank you for confirming that. It seems to me the substantive issue is that (as in so many other areas of life) it is perfectly possible to have familiar texts that different ‘schools’ agree with, but based on completely different interpretations. (This illustrates the point Marcus seemed to be making in terms of ‘French’ and ‘German’.)

        3. On Romer: I am not fluent in ‘econo-speak’ (as you may have noticed) and do not claim to have made much sense of him. But I do think there is a problem with ‘mathiness’. (My reading of Kay and King so far is that the problem is not with the mathematics as such, but with the understanding of it. I take this distinction more seriously than, e.g., Lars. Surprising?)

        On GT, this was written in 1944. How are we to interpret its use of the terms “established order of society” or “accepted standard of behavior” with respect to the German economy? Or to the various changes to the German economy, 1900-2000? You seem to be reading GT as an argument that ‘there is no alternative’, rather than pointing to ways one might think of subverting the “established order of society” or “accepted standard of behavior.” I’m not sure why you should read it like that, but it seems to me that you are not alone!

        (Maybe one day we could discuss Shakespeare over a pint or two? It might be more fruitful than fiddling with equations while economies ‘burn’. ;-) )

    • A.J. Sutter
      March 24, 2021 at 4:22 pm

      Sorry, I messed up my math (March 24 14:23). Will correct forthwith.

      • A.J. Sutter
        March 24, 2021 at 5:40 pm

        OK, I think my math is a bit better, but result is still mathy.

        We have p = q^{-a} = Q^b.

        So we also have q^{-a} = (qN)^{b}.

        Then:
        -a log q = b(log q + log N)

        So
        b = (-a log q) / (log q + log N)

        Now, a ≥ 0 by hypothesis. If it equals zero, then p seems pretty uninteresting, so I’ll assume the strict inequality holds. Then if q > 1, which is plausible because it’s some measure of consumption, and N > 1, which is plausible because it’s related to the number of people using mobile phones, then b must be < 0 because it's the negative of a positive quantity, so it can't be ∈ [0,∞].

        The only escape from this contradiction is for q ≤ 1; again because equality would lead to banality, I’ll assume strict inequality to hold. Romer never tells us what the range of q is. If it were limited to the interval [0, 1], that would seem a pretty significant fact to mention. The fact that this is omitted leaves us unsure about the meaning of q, and undermines this narrative. So one can conclude that this display of algebra is intended more to bedazzle than to enlighten, which seems pretty close to Romer’s notion of mathiness.
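        For what it’s worth, the sign argument above is easy to check numerically. This is just a sketch of the algebra in this comment; the parameter values are arbitrary illustrations, not Romer’s:

```python
import math

def implied_b(a, q, N):
    """Solve q**(-a) == (q*N)**b for b, following the derivation above:
    b = (-a log q) / (log q + log N)."""
    return (-a * math.log(q)) / (math.log(q) + math.log(N))

# With a > 0, q > 1 and N > 1 the implied exponent is always negative,
# so it cannot lie in the stated range [0, infinity).
for a in (0.1, 0.5, 1.0):
    for q in (2, 10, 100):
        for N in (10, 1_000_000):
            assert implied_b(a, q, N) < 0

# Only q < 1 yields a non-negative b:
print(implied_b(0.5, 0.5, 1000))  # positive, since log q < 0
```

        The check confirms the conclusion drawn above: unless q is confined below 1, the display is dimensionally self-contradictory.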

        Apologies for my earlier confusion.

  19. A.J. Sutter
    March 24, 2021 at 5:55 pm

    @Dave Thanks for your responses. Given the inequality in the world today, I don’t think the notion that some imputation or set of imputations is the “established order of society” is very palatable. But more fundamentally, I think that a vision in which “all the individual strives for is fully described by one numerical datum” just makes a toy out of GT.

    As for Shakespeare, I concur. Please come to Japan: there’s some pretty decent beer here, and other things besides.

    • March 24, 2021 at 9:07 pm

      @Andrew I think the key issue here is that you seem to read key passages in a quite different sense to me, and reach quite opposite conclusions. Reading 4.6.1 again, but out of context, I must say I have some sympathy with your view.

      I have the 3rd (1953) Edition. It has an ‘Index of Names’ that includes many who seem to me to have ‘sound views’ on this type of subject. For example, Hilbert caught my eye, which led me to this:

      “This [approach] is analogous to the present attitude in axiomatizing such subjects as logic, geometry, etc. Thus, when axiomatizing geometry, it is customary to state that the notions of points, lines, and planes are not to be a priori identified with anything intuitive, — they are only notations for things about which only the properties expressed in the axioms are assumed. Cf., e.g., D. Hilbert : Die Grundlagen der Geometrie, Leipzig 1899, 2nd Engl. Edition Chicago 1910.”

      This seems to me a good caution against ‘mathiness’ and a refutation of Lars’s views about GT (as distinct from Lars’s views about mainstream economists’ views on GT). At least it seems to me to support my supposition that I could read it ‘logically’, as distinct from giving priority to secondary sources.

      In terms of economics my reading of GT is that ‘if you are happy to use indifference curves in the (then) accepted way, then you may as well assume simplistic utility functions.’ I haven’t checked the detail of this argument, as I’ve never come across a situation in which conventional indifference curves seemed appropriate.

      To quote GT:

      “66. Generalization of the Concept of Utility

      If neither monopoly nor monopsony exist, … it is easy to verify that in this case acyclicity does not prevail.

      We have treated the concept of utility in a rather narrow and dogmatic way. We have not only assumed that it is numerical — for which a tolerably good case can be made (cf. 3.3. and 3.5.) — but also that it is substitutable and unrestrictedly transferable between the various players (cf. 2.1.1.). We proceeded in this way for technical reasons: The numerical utilities were needed for the theory of the zero-sum two-person game — particularly because of the role that expectation values had to play in it.

      Thus a modification of our concept of utility — in the nature of a generalization — appears desirable, but at the same time it is clear that definite difficulties must be overcome in order to carry out this program.

      Specifically: It is difficult to see how a definite value can be assigned to a game, unless it is possible for each player to decide in all cases which of the various situations that may arise is preferable from his point of view. This means that individual preference must define a complete ordering of the utilities.

      In the n-person game the characteristic function is defined with the help of the value in various (auxiliary) zero-sum two-person games. …Thus the definition of the characteristic function in an n-person game is technically tied up with the numerical nature of utility in a way from which we cannot at present escape.”

      GT goes on to suggest various generalizations which seem to me valuable, but – as far as I can see – GT never claims that these are ‘absolute, unconditional, forever’ solutions, as you and Lars seem to suppose. But in my (limited) experience GT here provides a good source of mathematical models with which to refute many odd claims about GT and economics.

      But maybe the issue of ‘palatability’ is more important than the technical details? My own notes on GT are at https://djmarsay.wordpress.com/mathematics/maths-subjects/game-theory/von-neumann-morgenstern/ , where I quote 4.6.2 and 4.6.3 which seem to me to clarify the 4.6.1 which you quote.

      It seems to me that if you think that inequality is an inevitable consequence of “the established order of society” then you might re-read GT and get some ideas about what to do about it. Or can you suggest anything more insightful and accessible?

  20. A.J. Sutter
    March 24, 2021 at 6:29 pm

    More about Romer’s math: I was so absorbed a little while ago in fixing my miserable algebra that I missed my real point, which is dimensionality. This makes his math even more problematic.

    We don’t know what the dimension of q is. Let’s call it [zebras], or [Z] for short. Then the dimension of Q is [Z], assuming that N is just a number. (It would be [ZH] if N has the dimension of human beings, though for argument’s sake I’ll give Romer the benefit of the doubt and ignore this interpretation.)

    Then the dimension of p is [Z]^{-a}. But since p = Q^b, for dimensional homogeneity we must have:
    [Z]^{-a} = [Z]^{b}.

    In that case −a = b; and since a ∈ [0,1], the only way to fulfill b ∈ [0,∞] would be for both exponents to be 0 — a pretty useless result.
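The dimensional bookkeeping above is simple enough to check mechanically. A minimal sketch (my own illustration, not part of the comment; the grid over tenths is purely for demonstration), assuming [p] = [Z]^(−a) and [Q^b] = [Z]^b as derived above:

```python
# Homogeneity of p = Q^b, where [p] = [Z]^(-a) and [Q] = [Z],
# requires the exponents of [Z] on both sides to match: -a = b.

def consistent(a, b):
    """True when p = Q^b is dimensionally homogeneous."""
    return -a == b

# Scan a in [0, 1] and b >= 0 on a coarse grid: only a = b = 0 survives.
solutions = [(a / 10, b / 10)
             for a in range(0, 11)
             for b in range(0, 11)
             if consistent(a / 10, b / 10)]
print(solutions)  # [(0.0, 0.0)]
```

As the comment says, requiring both a ∈ [0,1] and b ∈ [0,∞] forces the degenerate case.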

    By focusing solely on the algebra in my comment at 17:40, I was falling into the trap that is so comfortable for economists, of ignoring the dimensionality of their work. Most variables in economics involve dimensions of money, stuff, human beings or time. Exponents are not cheap. Dimensions need to be accounted for.

    (If I’ve erred again in my math, I will appreciate hearing about it.)

    • March 24, 2021 at 9:10 pm

      This is an important point. Romer seems to think that ‘logically’ everything must be reducible to one dimension. But he hasn’t read GT! (See my previous.)

      • A.J. Sutter
        March 25, 2021 at 3:34 am

        @Dave Thanks for your comments. A long-ish reply, albeit maybe not so disputatious:

        1. Apropos of Romer:

        Thank you for noticing this point. If you ever wanted an example of how physics and (mainstream) economics look at the world in different ways, dimensional analysis is it.

        Our new academic term starts in a few weeks, and I’ve been preparing some background notes on economics for undergrad students in my sustainability course. I have sections on, among other things, dimensions, stocks/flows, and economic growth. While preparing them, I decided to look at a bunch of textbooks (micro, macro and on growth specifically) to see how they dealt with dimensions on the variables in Cobb-Douglas production functions:

        Y(t) = A(t) • K(t)^{α} • L(t)^{1-α} [*]

        In case you’re not familiar with the convention, Y is output, K is capital, L is labor, A is a productivity factor, and α is a number with value in [0,1]. This is the standard equation used as a starting point for mathematical models of economic growth.

        Among the 14 textbooks I had around the house, plus the landmark papers Solow (1957) and Romer (1990), there isn’t any uniformity about the dimension of output: the majority mentioned that output is a flow, but they differed as to a flow of what. 6 sources clearly said output was goods & services, 3 more were ambiguous but could be construed the same way, 4 said it was a flow of money, and 3 others entirely ignored the question. There was similar variety about the dimensions of the other variables (stuff, people, money, dimensionless, or no proposal) — and significantly, no one cites the original Cobb & Douglas (1928) paper that gave the function its name.

        Inspection of equation [*] suggests you’re going to have homogeneity problems if K and L have differing dimensions, because of their different exponents. Somehow the factor A is going to have to reconcile the discrepancy, and also make sure you wind up with the correct dimension of output Y, so that the two sides of the equation balance.

        E.g., suppose α = 0.3, and [Y], [K] and [L] are [money], [money] and [person・T], respectively. The capital-labor product

        [money]^{0.3} [person・T]^{0.7}

        has to be multiplied by a factor

        [A] = [money]^{0.7} [person・T]^{-0.7}

        to give the right dimension for output. This is feasible so long as α never changes. But if α is α(t) you’ll have problems, because to assure dimensional homogeneity A(t) obviously must vary with α(t). The problem is that you generally won’t be able to compare values of A at different times, because they won’t have comparable dimensionality: e.g., [money]^{0.7} [person・T]^{-0.7} can’t be compared with [money]^{0.65} [person・T]^{-0.65}. No author seems to be aware of this problem. For evidence that α is indeed α(t), see the data in Chapter 6 of Thomas Piketty’s Capital in the Twenty-First Century (2013 French/2014 English).

        What’s the relevance of the 1928 paper? Cobb and Douglas didn’t have these dimensional issues at all. All the variables in their equation were dimensionless, each of [Y], [K] and [L] having been normalized to its respective value in the year 1899. In that set-up, A (they called it ‘b’) can simply be a scalar. They also supposed that α (their ‘1-k’) was constant, with value 0.25 for the capital share.
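Cobb and Douglas’s base-year normalization can be sketched as follows (the numbers are hypothetical, for illustration only; they are not the 1928 data):

```python
def normalize(series, base_year):
    """Divide each value by the base-year value; the units cancel,
    leaving dimensionless index numbers, as in Cobb & Douglas (1928)."""
    base = series[base_year]
    return {year: value / base for year, value in series.items()}

# Hypothetical output figures in dollars; after normalization to 1899
# each value is a pure number, so no dimensional bookkeeping is needed.
Y = {1899: 100.0, 1900: 101.0, 1922: 161.0}
print(normalize(Y, 1899))  # values 1.0, 1.01, 1.61
```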

        As with GT, sometimes it’s worth looking at the original source — that would have saved many textbooks from spouting foolishness (or at least, it would have reduced the amount spouted).

        2. Apropos of GT:

        Certainly it’s possible to look at the same theory from different perspectives. In August 2005 I was writing a column about negotiation for a Japanese business magazine. (I am a lawyer by trade, with a practice something like a solicitor’s, so negotiation was my bread and butter at the time.) I wanted to write a column about how GT is useless for real-life negotiation, which has certainly been my experience. On very short notice, Prof. Robert Aumann kindly agreed to an interview by telephone; incredibly, he spent 3 hours on the phone with me. One of the things he said at the outset is that GT would be useful only where the problem and solution are “sharply presented,” such as in auctions and electoral contests.

        A part of the interview that I recall because I wrote about it (I just rediscovered the audio files today, and haven’t had a chance to re-listen to them yet) was that I asked him how he regards GT. He said that he regarded it as a scientist; that he is trying to model human behavior, and adjust the theory to make it more empirically accurate. I then proposed the following allegory:

        Two researchers make mathematical models of the color of the sky. In both cases, the model predicts green. Each researcher looks outside and sees that the sky is blue. One, a scientist, says, “OK, I’ve got to fix my model.” The other says, “The sky is sub-optimal!”

        I asked Prof. Aumann which was his approach to GT, and he confirmed it was the former. (I believe that during our conversation I cast the second researcher as a management consultant, rather than an economist, out of deference to my interlocutor.) So clearly one’s purposes will affect how one reads a theory.

        About vN&M specifically, the fact that they understood the limitations of their efforts, as you point out, doesn’t imply (i) that those limitations can be overcome, nor (ii) even if the limitations were mitigated, that the theory would be a good one to use for trying to reach a better and more equitable society. Clearly, a society isn’t a “sharply presented” problem for solution.

        And as I’ve mentioned earlier, I believe politics must be taken into account. The reason a particular set of imputations, and not a different one, is available is because of power (though I don’t mean for my use of GT lingo in this comment to be an endorsement of GT). Personally, I think the data and interpretation of Piketty (2013/2014) are more useful than GT for informing practical change in the future, notwithstanding that his book is largely structured as a quest for data for various parameters of the neoclassical growth model (as in the Cobb-Douglas model above).

      • March 25, 2021 at 12:40 pm

        @Andrew. I’m still feeling a bit hurried here, and relieved that you haven’t mistaken brevity for curtness.
        0. In the short-run, disputation is bad. But in the long-run, the avoidance of ‘disputation’ can be even worse. Hopefully we are being appropriate to the time and place.
        1. “No author seems to be aware of this problem.” What about Keynes, 1921? I would rather say that I am not aware of any text that succeeds in getting this point across. My own view is that Shakespeare, for example, was well aware of it, and I sympathize with those who feel that anyone who doesn’t ‘get it’ must be acting in bad faith, and hence that you must be a wicked troll and I am wasting my time. But in my youth I was advised that “As with GT, sometimes it’s worth looking at the original source — that would have saved many textbooks from spouting foolishness (or at least, it would have reduced the amount spouted).”
        You have been kind enough to share a few insightful stories. Here’s one of mine: in the late 90s I was asked to look into the logical basis (or otherwise!) of arguments around financial stability, so I ordered Keynes 1921 from the British Library (it was out of print), only to be told that they didn’t have a lending copy, as the last person to borrow it hadn’t returned it. That person turned out to be me, from when I was looking into the logic of discussions around the stability of the Cold War. So not many had ‘sought the source’. Or maybe everyone who mattered had their own copy?
        Another relevant aspect of this story is that it was only later that I realised that the Keynes of 1921 was the self-same person as the well-known economist. Who’d have thought it?
        2. I don’t know Aumann. Your story seems consistent with my view that anyone who understands ‘proper’ science will not be deluded by ‘mathiness’.
        Aumann is interested in describing actual economies. I must admit to being more interested in how they might be, e.g. more humane and sustainable. My interest in ‘how things are’ is largely driven by a concern as to how we might change things.
        More importantly, you say:
        “About vN&M specifically, the fact that they understood the limitations of their efforts, as you point out, doesn’t imply (i) that those limitations can be overcome, nor (ii) even if the limitations were mitigated, that the theory would be a good one to use for trying to reach a better and more equitable society. Clearly, a society isn’t a “sharply presented” problem for solution.”
        Can I just say, for now, that I think there is a critical difference in how we read such texts. I have run out of quotes for you. I have no idea how to enlighten you via the written word (or how you might ‘enlighten’ me). Yet I’m unhappy at the thought that we might simply ‘agree to disagree’. But at least it’s sunny out and lockdown is easing.

        Best wishes.

      • A.J. Sutter
        March 25, 2021 at 1:02 pm

        @Dave Not to worry about brevity, lack of quotes, etc.

        (1) As for “no author”: I meant of those that I consulted — not a universal. I admit, unclear as written.

        (2) Aumann is an Israeli mathematician who got the prize no one on this blog is allowed to call a Nobel (because it’s awarded by the Bank of Sweden; and I’m being facetious to say no one is allowed …) for his work in GT; he was awarded it just a few weeks after my phone call with him (lucky coincidence, no causal connection suggested …).

        (3) I too am interested in how things might be, which is one reason I got involved with the topic of degrowth, to encourage governments and companies to abandon prioritizing GDP growth as a matter of policy (at least in Japan, where I live), and to revitalize rural areas like the one I live in; and also in constitutional & electoral system reform (again, starting in Japan, since I live here).

        Thanks for engaging with these topics, and enjoy the sun!

    • March 28, 2021 at 4:19 pm

      There are so many posts (and even long posts). It was not easy for me to follow A.J. Sutter and Dave Marsay’s arguments. I have only followed point 3, concerning Romer and the Cobb-Douglas production function.

      Production functions and total factor productivity are the main areas I am concerned with, but in a negative way. I mean that I think all production functions (and related concepts) are problematic. They are only parables with inputs, outputs, and the rest. I am doubtful of total factor productivity (TFP) and I do not use such a term as my own concept. However, as it is widely used in growth and development theory, I have to learn more about its nature. I want to ask Andrew and Marsay whether you have any specific ideas about production functions and TFP.

      But, before that, I have two related questions that came to my mind when reading Sutter’s dimensional analysis. Let me add some explanation as background to these questions. Mathematicians normally do not care whether a function passes dimensional analysis. Physicists, on the other hand, ask first of all whether a function or an equation passes dimensional analysis. There must be some deep reason why mathematicians and physicists differ so much on this point.

      Question 1. What is the deep reason why physicists care so much about dimensions?

      For mathematicians, a function f : S1×S2×…×SM → T can be defined in any way, so long as every point (x1, x2, …, xM) in S1×S2×…×SM is assigned a value f(x1, x2, …, xM) ∈ T. Mathematicians do not care whether x1, x2, …, xM have the same dimension. When variables are combined in an additive form a1 x1 + a2 x2 + … + aM xM, we may care whether x1, x2, …, xM have the same dimension or are quantities of the same kind. But we do not hesitate to write a polynomial a1 x^M + a2 x^(M-1) + … + aM without worrying how each term could be arranged to have the same dimension.

      So, a question to physicists is: why do you worry so much that both sides of an equation have the same dimension? Does it mean that homogeneity is necessary in order for the equation to have the status of a law? For example, are you worried that a dimensionally inhomogeneous equation cannot express a law of nature, since such a law must be independent of our choice of units for the variables?

      Question 2. Why are the dimensions of all physical quantities expressed as monomials?

      A monomial is an expression composed of several variables combined only by multiplication and division.

      Is there any physical quantity whose dimension is not of monomial type? If all physical quantities have monomial dimensions, what is the reason for it? Is it related to the point above, that a physical quantity must have some property that is invariant under, e.g., a change of units?
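One standard answer, roughly Bridgman’s (this sketch is my own addition, not part of the comment): a change of units multiplies each base quantity by a fixed conversion factor, and only monomial (power-law) expressions then change by an overall constant factor, independent of the value measured. Anything else, e.g. the logarithm of a dimensional quantity, fails this test:

```python
import math

def rescale(x, factor):
    """A change of units multiplies a quantity by a fixed conversion factor."""
    return x * factor

f = 3.28084  # metres -> feet

# A monomial such as x**2 transforms by the constant factor f**2,
# whatever the value of x:
print(rescale(2.0, f) ** 2 / 2.0 ** 2)  # ~ f**2
print(rescale(5.0, f) ** 2 / 5.0 ** 2)  # ~ f**2 again

# log(x) has no such fixed factor: the ratio depends on x itself,
# so 'log of a length' is not a well-formed physical quantity.
print(math.log(rescale(2.0, f)) / math.log(2.0))
print(math.log(rescale(5.0, f)) / math.log(5.0))  # a different ratio
```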

      * How are subscripts and superscripts expressed on this page?
      Do the following tags work?
      x1 x1

      • Meta Capitalism
        March 28, 2021 at 5:00 pm

        Some thoughts on why dimensional analysis is important and its limitations. It is sad to see otherwise great minds become slaves to their tools.

      • March 28, 2021 at 8:32 pm

        Interesting questions from Yoshinori. And well done to Lars for opening this can of worms. Who’d have guessed quite how muddled we all are.

        0. I’m more familiar with ‘production functions’ in a very different field. My general view of economics is that it clearly matters and I only wish I could meaningfully contribute in the way you would like. But first, I suggest that economists need to understand mathematics better, rather than rely on mathematicians to understand them. (Although I am trying!)

        1. Your comment about dimensional analysis somewhat shocks me.
        a) A quick tip from school was to always check for dimensionality whenever you had derived an equation.
        b) My tutor for my first degree explained everything in terms of category theory, which is a kind of ‘dimensionality on steroids’. Anything that he couldn’t so explain he dismissed as pseudo-maths. (Category theory is essentially a logical development from Keynes’ maths.)
        c) I’ve got to be a C.Math FIMA (which is quite ‘posh’) without coming across any mathematicians who don’t get this.
        d) My experience in working with engineers and other practical folk is that whenever I point out a dimensionality blunder it turns out that there is some unit-dependent constant in there, and my ‘terms of engagement’ are that we have to convert it into something ‘dimensionless’, or else I get terribly confused.
        e) My (limited) experience of economics is that weird equations are justified by hand-wavy arguments that I find unconvincing.
        f) When I left uni I could have earned a lot more if I had gone into finance, but I couldn’t get my head around their practices.
        2. They aren’t. But this is an excellent insight. I could refer you to von Neumann but he’s a difficult read. I have some hopes of being able to make things clearer one day, but first I need to understand where my audience is ‘coming from’. Which I don’t. (But I’m getting more of an idea from you, Andrew and Lars – thanks for your patience.)

        P.S. Some html seems to work, but not all. Anyone have a guide?

      • March 28, 2021 at 8:46 pm

        Meta Capitalism is interesting. “Physical (material) things have quantitative relationships that are measurable.” I can quite see that people who think this of physics would think what they seem to think of economics.

        “So the legitimate questions arise when confronted with human social systems—such as economics is by its very nature—to what extent can mathematical models capture the true underlying causes of changes in economic behaviour?”

        I refer you to von Neumann. But see my response to Yoshinori.

        My main point is that there’s more to mathematics and mathematical modelling than economists appreciate. My own view is that the choice of tools ‘should’ be the prime responsibility of subject matter experts (e.g., economists). You certainly can’t rely on mathematicians. But maybe we could somehow build bridges?

        (Technically, I find game theory and cybernetics both very fruitful, but I approach them from mathematics, not social sciences. It makes a difference!)

      • Meta Capitalism
        March 29, 2021 at 1:57 am

        I refer you to von Neumann … My main point is that there’s more to mathematics and mathematical modelling than economists appreciate. My own view is that the choice of tools ‘should’ be the prime responsibility of subject matter experts (e.g., economists). ~ Dave Marsay

        Thanks Dave, I have reviewed your exchange with Andy, and find Andy’s comments penetratingly insightful in how they cut through the molasses and get to the core of the issue in my view:

        About vN&M specifically, the fact that they understood the limitations of their efforts, as you point out, doesn’t imply (i) that those limitations can be overcome, nor (ii) even if the limitations were mitigated, that the theory would be a good one to use for trying to reach a better and more equitable society. Clearly, a society isn’t a “sharply presented” problem for solution.

        .
        I suspect you are very right regarding mathematics and economists. I think mathematics is the language of science, and to the extent that it is the yardstick by which we measure the material universe of things and beings (to the extent they can really be measured), it is important to build those bridges. But not all of human reality is quantifiable, and not everything that is valuable to understanding our place in the universe and in social reality can be quantified; economics, in particular, is first and foremost rooted in a cultural and political context full of human value judgments that are not reducible to “social mathematics”, despite the machine dreams of some.

        I don’t think we can leave economics solely to economists or mathematicians or politicians, etc. I am with those who called into question the Econocracy. The power of mathematics is undeniable, but it is also undeniable that “propaganda” and “dangerous ideology” can be hidden inside a mathematical wrapper to obfuscate more than illuminate.

        THE ECONOCRACY

        ‘Economics has become the organising principle, the reigning ideology, and even the new religion of our time. And this body of knowledge is controlled by a selective priesthood trained in a very particular type of economics – that is, neoclassical economics. In this penetrating analysis, the authors show how the rule by this priesthood and its disciples is strangling our economies and societies and how we can change this situation’ — Ha-Joon Chang

        ‘An interesting and highly pertinent book’ Noam Chomsky

        ‘This book is badly needed, looking at academic economics afresh: clear, well-written, well-researched, non-doctrinaire. It makes the case for “pluralistic” economics to address such questions as financial instability and climate change. Every economist and citizen should get a copy’ — Vince Cable

        ‘A rousing wake-up call to the economics profession to rethink its mission in society, from a collective of dissident graduate students. Their double argument is that the “econocracy” of economists and economic institutions which has taken charge of our future is not fit for purpose, and, in any case, it contradicts the idea of democratic control. So the problem has to be tackled at both ends: creating a different kind of economics, and restoring the accountability of the experts to the citizens. The huge nature of the challenge does not daunt this enterprising group, whose technically assured, well-argued and informative book must be read as a manifesto of what they hope will grow into a new social reform movement’ — Robert Skidelsky

        ‘If war is too important to be left to the generals, so is the economy too important to be left to narrowly trained economists. Yet, as this book shows, such economists are precisely what we are getting from our leading universities. Given the role economists play in our society, we need them to be much more than adepts in manipulating equations based on unrealistic assumptions. This book demonstrates just why that matters and offers thought-provoking ideas on how to go about it’ — Martin Wolf, Chief Economics Commentator at the Financial Times

        ‘Economics, as practised in university economics departments, regurgitated by policy makers, and summarised in the mainstream media, has become a form of propaganda. This superb book explains how dangerous ideology is hidden inside a mathematical wrapper … essential reading for anyone who wants to know about the con – that includes everyone concerned with the future of democracy’ — Jonathan Aldred, author of The Sceptical Economist

        ‘This book is for the many students who want to study economics because they want to help society solve its problems: a critical introduction to contemporary economics, written by a new, post-2008 generation of economists … There is no better vaccination against the economic disease than this immensely readable book’ — Wolfgang Streeck, author of Buying Time: The Delayed Crisis of Democratic Capitalism

        ‘Is economics too important to be left to the economists? The authors marshal a powerful case … An important and timely book’ — Andrew Gamble, Professor of Politics at the University of Cambridge

        https://a.co/4PL5c1q

      • Meta Capitalism
        March 29, 2021 at 2:30 am

        My own view is that the choice of tools ‘should’ be the prime responsibility of subject matter experts (e.g., economists). You certainly can’t rely on mathematicians. But maybe we could somehow build bridges? ~ Dave Marsay

        .
        It seems the foundations upon which one builds bridges are an area where mathematicians might be of some help in differentiating sense from nonsense. It does little good to build bridges when their footings are set in quicksand. If Roi, a mathematician, can shed light on sense and nonsense, then surely other mathematicians can as well.

      • March 29, 2021 at 11:53 am

        Meta Capitalism. It seems to me there is no shortage of books and worked examples by mathematicians that shed a searing light on much of the nonsenses that we are suffering from. To me, the problem is that much of the population seems to view things ‘through a glass darkly’. But then again, I’ve often been told I must be stupid for not seeing what others see so plainly.

        Some people think that current events will open people’s eyes and minds. I’m struggling to open mine but tend to think that mathematics isn’t the main problem here: surely science would inoculate people against any tendency to mathism? (They certainly seem to be trying. Or maybe you don’t see it that way?)

      • March 29, 2021 at 12:57 pm

        Meta Capitalism: I have just realised I responded to your latest without having responded to your earlier, which may confuse a casual reader of this blog. Ooops!

        Anyway, you say “mathematics is the language of science and to the extent that it is the yardstick by which we measure the material universe of things and beings (to the extent they can really be measured)”. Looking at https://en.wikipedia.org/wiki/Measure_(mathematics)#Non-measurable_sets I see we may have a problem. As Keynes points out, the ‘things’ of economics are fictions, and the ‘variables’ aren’t actually measurable. But does this matter?

        Von Neumann argues that if you are prepared to accept the usual notion of indifference curves then you may as well assume that there are fundamental ‘things’ of interest (such as ‘the inflation rate’) that are measurable. Some of the discussion on dimensionality touches on this, but it seems to me a bit of a red herring. Surely the thing to note, as vN&M do, is that the assumption about indifference curves is a very strong one. I read Keynes as saying much the same, although it seems to me that he may not have realised the full significance of this until around 1916, and that he didn’t quite edit his Treatise thoroughly enough on this point. (But read as mathematics, it should be clear enough.)

        I could bang on about the axiom of choice versus the axiom of determinacy, but I’m not sure where to start. Most mathematicians are prepared to work with either, relying on the judgement of those who ‘should’ understand these things. But is this wise? (Genuine question!)

  21. March 25, 2021 at 6:13 pm

    For these two or three days, I have been reading about Léon Walras. Of course, my survey is neither complete nor sufficient, but I found something astonishing, at least for me. Léon Walras was a Platonist.

    This much is certain, however, that the physico-mathematical sciences, like the mathematical sciences, in a narrow sense, do go beyond experience as soon as they have drawn their type concepts from it. From real-type concepts, these sciences abstract ideal-type concepts which they define, and then on the basis of these definitions they construct a priori the whole framework of the theorems and proofs. After that they go back to experience not to confirm but to apply their conclusions. (Walras 1874, 53) (Cited from Ludovic Ragni 2018 “Applying mathematics to economics according to Cournot and Walras” p. 90)

    Walras himself was aware of his philosophical stance:

    A truth long ago made clear by the Platonic philosophy is that science does not study corporeal entities but universals of which those entities are the manifestations. Corporeal entities come and go; but universals remain forever. Universals, their relations, and their laws, are the subject of all scientific study. (Walras, 1954, p. 61) (Cited from Eleuterio Prado 2020 “Walras in the light of Marx, Lacan and his own” p. 437)

    As a consequence, Walras was extremely isolated. Methodologically, Cournot, Pareto and Dupuit could not agree with Walras’s idealism.

    I have no textual evidence, but it is probable that Debreu was deeply influenced by Léon Walras himself, not through Bourbaki as Weintraub and Moczard claim. In my opinion, Bourbaki was not such an idealist as to dare argue that mathematics has something to say directly about real entities like physical bodies or economies.

  22. A.J. Sutter
    March 26, 2021 at 9:56 am

    Yoshinori, you might take a look at the following article:

    Cot, Anne & Lallement, Jérôme (2006)
    « 1859-1959 : DE WALRAS À DEBREU, UN SIÈCLE D’ÉQUILIBRE GÉNÉRAL »
    Presses de Sciences Po | « Revue économique »
    www.cairn.info/revue-economique-2006-3-page-377.htm

    This suggests that both Walras and Bourbaki were influences on Debreu.

    I have a couple of reservations about the article above, though. First, it cites three articles by Debreu as evidence of Bourbaki’s influence, but only one of them explicitly mentions Bourbaki (Debreu’s 1983 Nobel lecture “Economic Theory in the Mathematical Mode,” re-published in several places in 1984). The other two are:
    • 1986: “Theoretic Models: Mathematical Form and Economic Content,” Econometrica, 54 (6), p. 1259-1270
    • 1991: “The Mathematization of Economic Theory,” American Economic Review, 81 (1), p. 1-7
    The 1986 paper opens with some mention of Walras, as does the famous 1954 Arrow & Debreu Econometrica paper. The 1991 paper (a presidential address to the AEA) is the source of the SSC quote, BTW. This one is quite pertinent to your comments about Debreu’s idealism.

    My other reservation about Cot & Lallement is their sloppy reference to a 1952 paper by Debreu. The title isn’t given in the main text, and the paper is omitted from the bibliography. Cot & Lallement claim that this paper includes a figure analogous to the Walrasian auctioneer (commissaire-priseur). However, I could find only one paper from Debreu in 1952, “A Social Equilibrium Existence Theorem” (PNAS 38, 886-893); I couldn’t find any agent analogous to an auctioneer in it. But possibly I missed something buried in the math (the paper is categorized by PNAS as “Mathematics,” BTW).

    Getting back to your thesis, one other point that may be relevant is that both Bourbaki and Walras appear in the bibliography of Theory of Value (I verified this).

    I hope this helps.

  23. March 26, 2021 at 5:20 pm

    Thank you, Andy. As I could get the PDF of Cot & Lallement (2006), I read it before I read your comments. In this way, I believe I could form an independent estimate of their paper.

    My impression is very similar to yours. They talk about Bourbaki’s influence on Debreu, but the name Bourbaki appears only once, in footnote 2 on page 385. The word bourbakisme appears only once, on the next page. There are no explicit quotes from Debreu himself. I have access to only one of the three papers, the Presidential Address (1991); I could not see the other two. From your information I found that the 1952 paper is included as Chapter 2 of Debreu’s Mathematical Economics (1986). I took a quick look at it and found that Bourbaki is indeed in the references, cited at the end of Section 1 with reference to the complete real line. This is only a technical note, and we cannot say it proves an influence of Bourbaki: anyone would cite whatever paper or book was most popular at the time of writing.

    Cot and Lallement follow Weintraub (1985), although they dilute the latter’s claim. It is doubtful whether they examined for themselves how Bourbaki influenced Debreu.

    Anyway, I must express my thanks to you. It was a great discovery for me that Walras and Debreu were such idealists. This poses a very interesting question. I am thinking of posing a question on ResearchGate to see if we can find someone who can give us good information on this point.

  24. A.J. Sutter
    March 29, 2021 at 10:41 am

    @Yoshinori, regarding dimensions and monomials (March 28, 2021 at 4:19 pm):

    Dave has already given some good answers. Here are my embellishments. In case this response is too long, the essentials come before the COMMENTS section:

    Q1: Dimensions are needed because, as used in physics, numbers usually count things. Balancing dimensions is essentially the principle that you have the same number of things on each side of the equation. (That’s the essence of an equation.) More precisely, dimensions keep track of the kinds of things on each side of the equation.

    Units handle the scalars attached to the dimensions: the rules about them are more context-dependent. E.g., the units need not be homogeneous if you’re stating a conversion factor: 1 inch = 2.54 cm, 32°F = 0°C, 1 BTU = 1055.06 joule, $37.98 = ¥4,164, etc.
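    To make the conversion-factor point concrete, here is a minimal sketch of my own (the function name is hypothetical); the comments record which units attach to each constant so that the two sides of the equation balance:

    ```python
    def fahrenheit_to_celsius(temp_f):
        """Convert a temperature from degrees F to degrees C.

        The offset 32 carries units of degF, and the factor 5/9 carries
        units of degC per degF, so the result comes out in degC.
        """
        return (temp_f - 32.0) * 5.0 / 9.0
    ```

    For example, `fahrenheit_to_celsius(32.0)` gives `0.0` and `fahrenheit_to_celsius(212.0)` gives `100.0`, matching the conversion pairs listed above.
    
    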

    The kids in my elementary school had an intuitive understanding of the need to compare likes with likes. The older students used to tease the younger ones with questions like “Would you rather walk to school, or carry your lunch?” and “Would you rather go to New York, or by bus?” Then they’d laugh their heads off at the looks of befuddlement from the younger kids.

    So one could say that the purpose of dimensional homogeneity is to make sure an equation compares likes to likes. However, although one might say that dimensional homogeneity has its origins in realism, i.e., counting things, in principle there isn’t anything impermissible about dimensions that may be difficult to understand in terms of everyday reality, e.g. [L]^{0.5} or [L]^{-2}.

    Q2: Physics doesn’t rely only on monomials. For example, there are plenty of Taylor series expansions, especially once you start talking about the physics of fields, both classical and quantum. Another example is Fourier series expansions, which have many applications in classical physics. See, e.g., Körner, T. W. (1988) Fourier Analysis. Cambridge: Cambridge University Press.

    Q3: In HTML you should be able to use the tags ‘sub’ and ‘/sub’ for starting and stopping subscripts, and ‘sup’ and ‘/sup’ for superscripts, with the usual angle brackets. I’ll try that below: it worked when I tried it in my browser, though since this blog lacks a preview function, I’m taking a chance. (Though if you see garbage, please view with Unicode UTF-8 encoding, probably accessible with your browser’s “View\Text Encoding” pop-down menu.) To hedge your bets, you can use the LaTeX style of underscore for subscript and caret for superscript, as I did above: it’s ugly, but people will figure it out.

    COMMENTS:

    A. DIMENSIONS AND SERIES EXPANSIONS

    The use of series expansions in physics may seem to raise problems with dimensionality, since an infinite series is a sum whose terms have inhomogeneous degrees [a_0 + a_1x^1 + a_2x^2 + a_3x^3 + …]. This is a topic that confuses people perennially (e.g., as in some past versions of Wikipedia: see [2]), but it’s not the problem that people suspect.

    First, consider series expansions of transcendental functions, such as exponentials, logarithms and trig functions. In such a case, the dimensioned quantities should be outside the argument of the function: see [1: pp. 42-46], and also [2], with the elegant maxim, “dimensions must be left at the doorstep of transcendental functions.”
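    To make the maxim concrete in code: a dimensioned quantity is divided by a reference quantity of the same dimension before it reaches the transcendental function. A minimal sketch of my own (the decay function and its parameter names are hypothetical illustrations):

    ```python
    import math

    def exponential_decay(x_m, x0_m, amplitude):
        """Toy decay law: amplitude * exp(-x / x0).

        x_m and x0_m are both lengths (say, in metres), so the ratio
        x_m / x0_m handed to exp() is a pure number: the dimensions
        are "left at the doorstep" of the transcendental function.
        """
        return amplitude * math.exp(-x_m / x0_m)
    ```

    At x = 0 the result is just the amplitude, since exp(0) = 1.
    
    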

    Second, consider Taylor’s series expansions, which are polynomial, rather than transcendental. The general form is

    f (x) = ∑_{n = 0}^{∞} (1/n!)・f^{(n)}(a)・(x − a)^n,

    where f^{(n)} means the nth derivative of f with respect to x, d^{n}f/dx^{n}.

    Obviously there isn’t any problem if the argument of the function is dimensionless, because we can move the dimensioned part of the function outside the series expansion. Suppose instead f (x) has some dimension, call it [f]. Let’s consider the first few terms of the expansion:

    f (x) = f (a) + (x − a)・df (a)/dx + (1/2!)・(x − a)^2・d^2f (a)/dx^2 + …

    Then dimensionally we get

    [f] = [f] + [x]・([f]/[x]) + [x]^2・([f]/[x]^2) + …

    That is, each term of the expansion has dimension [f], because the expressions in powers of [x] cancel in each term. See generally [3].
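    The cancellation can be checked mechanically if dimensions are carried as exponent tuples over (M, L, T): multiplying quantities adds the exponents, dividing subtracts them. A minimal sketch of my own (the choice of energy for [f] and length for [x] is arbitrary):

    ```python
    def d_mul(a, b):
        """Dimension of a product: exponents add."""
        return tuple(x + y for x, y in zip(a, b))

    def d_div(a, b):
        """Dimension of a quotient: exponents subtract."""
        return tuple(x - y for x, y in zip(a, b))

    def d_pow(a, n):
        """Dimension of a power: exponents scale."""
        return tuple(x * n for x in a)

    F = (1, 2, -2)  # [f]: energy, i.e. [M][L]^2[T]^-2 (arbitrary choice)
    X = (0, 1, 0)   # [x]: length, [L]

    # The n-th term of the expansion is (x - a)^n * d^n f(a)/dx^n,
    # with dimension [x]^n * ([f]/[x]^n); each term reduces to [f].
    for n in range(6):
        term = d_mul(d_pow(X, n), d_div(F, d_pow(X, n)))
        assert term == F
    ```

    Every term passes the assertion, mirroring the term-by-term cancellation described above.
    
    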

    B. CONFUSION ABOUT DIMENSIONS IN ECONOMICS LITERATURE

    The peer-reviewed 21st-century economics literature has many examples of confusion about dimensional analysis ([4]–[9]).

    For example, the author of [4], irate about Cobb-Douglas production functions, written as

    Q = AK^{α}L^{β}

    seems to believe that dimensions of economic quantities must have exponents identically 1 to make sense (@96).

    The authors of [5], purporting to correct him, make the fallacious argument that “Different values of α and β change only the magnitude of A” (@50; emphasis in the original). Neither set of authors bothers to check how Cobb & Douglas did things.

    The authors of [6] are troubled by the idea that a quantity with dimension [money] could be raised to the 2nd or 3rd power (@1605 Fig. 1; the idea that this is “nonsensical” is reiterated in [9] @13). Generally, though, they argue correctly that the arguments of logarithms and exponentials should be dimensionless, and overall argue in favor of dimensional homogeneity in economics, claiming that “economists concerned with the biophysical and monetary aspects of ecological and economic interactions must understand the importance of dimensional homogeneity.”

    Baiocchi [7], rebutting [6], regards the claim quoted above as “extraordinary.” He cites two sources claiming that dimensional homogeneity is unnecessary in certain physical theories (non-Newtonian fluids, special relativity and quantum mechanics) (@8) but doesn’t provide any examples. I’m not aware of any such examples, nor could I find any after searching, and his statement that “the so called non-Newtonian fluids include constants that vary from substance to substance” isn’t necessarily relevant to the topic of dimensionality. In special relativity, the dimensions of space and time are asserted to be equivalent, as are mass and energy: but even “E = mc^2” is dimensionally homogeneous.

    Baiocchi concludes by proposing that “To apply dimensional methods successfully to ecological economics, the traditional perspective of physics needs to be modified to acknowledge fundamental methodological differences in all the relevant contributing disciplines,” and that the “misguided and narrow application of the dimensional homogeneity principle [advocated in [6]] would actually undermine applied and interdisciplinary research.” (Id.)

    The authors of [8] argue that there isn’t any dimensional problem with production functions, but beg the question with their definition: “Definition 1. A production function is a mapping F : ℝ_{+}^{2} → ℝ_{+}” (@11) (i.e., from the two-dimensional space of positive real numbers to the one-dimensional space of positive reals). Obviously there won’t be a dimensional issue if the production function is defined, as here, as a matter of pure math. But I haven’t seen any leading economics textbooks or articles with this sort of definition, and I have seen many with attributions of physical units to one or more variables in the function.

    C. EMPIRICAL RELATIONS & HAPPY SPRING

    Units can sometimes matter: in some fields, such as electricity and magnetism, textbook authors are divided about whether to use MKS (meter-kilogram-second) units or CGS (centimeter-gram-second) units. Physical constants will obviously have different scalar values depending on which is used.

    Also, there may be certain empirical relationships that depend on units used. Baiocchi [7] claims that such relationships are further examples of accepted breaches of dimensional homogeneity. This assertion is wrong, in that even if the relationships are written without regard to dimensions, there needn’t be any inhomogeneity. A seasonally pertinent example is Dolbear’s law, which relates cricket chirps to temperature. Often these are expressed without explicit dimensions: e.g. [10],

    Field Cricket: T = 50+[(N-40)/4]

    Snowy Tree Cricket: T = 50 + [(N – 92)/4.7]

    Katydid: T = 60 + [(N – 19)/3]

    The source does mention that T is in °F, and N is chirps per minute. But we can write this in a homogeneous fashion as follows, e.g. for the field cricket:

    T [°F] = 50 °F + {(N − 40) [chirps・minute^{-1}・°F] / 4 [chirps・minute^{-1}]},

    or more simply

    [ ° ] = [ ° ] + {[chirp][T]^{-1}[ ° ] ÷ [chirp][T]^{-1}}

    Note that a formula with different numerical values would be used for °C, but with the same dimensional attributions to the terms.
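    As a toy illustration, the field-cricket formula can be coded with one consistent set of unit attributions recorded in the docstring (a hedged sketch of my own; the function name is hypothetical, the constants are from [10], and here the divisor, rather than the numerator, carries the temperature unit):

    ```python
    def field_cricket_temperature(chirps_per_minute):
        """Dolbear's law for the field cricket: T = 50 + (N - 40)/4.

        N is in chirps * minute^-1; on this attribution the divisor 4
        carries chirps * minute^-1 * degF^-1, so the quotient
        (N - 40)/4 comes out in degF and adds cleanly to 50 degF.
        """
        return 50.0 + (chirps_per_minute - 40.0) / 4.0
    ```

    A cricket chirping 120 times a minute thus indicates 50 + 80/4 = 70 °F.
    
    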

    I’m exhausted with all this HTML coding — time to enjoy the warm weather!

    REFERENCES:

    [1] Bridgman, P. W. (1922) Dimensional Analysis. New Haven: Yale University Press.

    [2] Matta et al. (2011) “Can One Take the Logarithm or the Sine of a Dimensioned Quantity or a Unit? Dimensional Analysis Involving Transcendental Functions,” Journal of Chemical Education 88(1), 67-70.

    [3] Berberan-Santos, M.N. & Pogliani, L. (1999) “Two alternative derivations of Bridgman’s theorem,” Journal of Mathematical Chemistry 26, 255–261

    [4] Barnett, W., II (2004) “Dimensions and Economics: Some Problems,” The Quarterly Journal of Austrian Economics, 7(1), 95-104

    [5] Folsom, R.N. & Alejo Gonzalez, R. (2005) “Dimensions and Economics: Some Answers,” The Quarterly Journal of Austrian Economics, 8(4), 45-65

    [6] Mayumi, K. & Giampetro, M. (2010) “Dimensions and logarithmic function in economics: A short critical analysis,” Ecological Economics 69, 1604–1609

    [7] Baiocchi, G. (2012) “On dimensions of ecological economics,” Ecological Economics 75, 1-9

    [8] Chilarescu, C. & Viasu, I. (2012) “Dimensions and logarithmic function in economics: A comment,” Ecological Economics 75, 10-11

    [9] Mayumi, K. & Giampetro, M. (2012) “Response to ‘Dimensions and logarithmic function in economics: A comment,'” Ecological Economics 75, 12-14

    [10] University of Nebraska-Lincoln Institute of Agriculture and Natural Resources Department of Entomology (2021) “Crickets and Temperature,” https://entomology.unl.edu/k12/crickets/temperature.htm

    • A.J. Sutter
      March 29, 2021 at 11:10 am

      OK, even after I pre-tested everything in my browser, clearly WordPress is ignoring a lot of HTML encoding — Yoshinori, I may have reproduced your negative result!

      I think most of the math is intelligible (e.g., exponents are written in-line), but I’ll redo some of it in pseudo LaTeX, just to be sure:

      A. In the preamble paragraph, the numbers on the as are subscripts, and on the xs are exponents.

      The general form of Taylor’s expansion is f (x) = \sum_{n = 0}^{∞} (1/n!)・f^{(n)}(a)・(x − a)^n,
      where f^{(n)} means the nth derivative of f with respect to x, d^{n}f/dx^{n}

      B. In [4], the Cobb-Douglas is written as Q = AK^{α}L^{β}.

      And in [8], production functions are defined as a mapping F : ℝ_{+}^{2} → ℝ_{+}.

      C. In the cricket discussion, {-1} is an exponent in the dimensional equations.

    • March 29, 2021 at 11:38 am

      I have a maths blog in which I use a minimum of equations. Initially this was because I found that many readers were seeing things differently from me, as in your ‘E=mc2’, which would give me every opportunity to rubbish what you say, should I happen to disagree with it. But I now fear that even when the mathematics is very clear and fully referenced etc., in accord with ‘best practice’, there are too many people who can read what you say in ‘good faith’ and completely misunderstand it. It’s a bit like going to a Shakespeare play, where different people can come away with completely different views of it.

      My own experience is like yours: that young kids often have an intuitive grasp of these things that many ‘educated’ graduates seem to lack. But why? (And is there any correlation with the impact of Covid-19 on different cultures, and if so why?)

      • A.J. Sutter
        March 29, 2021 at 4:02 pm

        @Dave ‘E=mc^2,’ of course. The context in which I mentioned it was the assertion of the author of ref. [7] above, who cited someone who said that special relativity tolerated dimensional inhomogeneities. The equation, the most famous result of special relativity, is something of a counterexample to Baiocchi’s claim, because it’s dimensionally balanced. Obviously, that’s not sufficient to disprove his claim, but he doesn’t give any examples in support of it, and I’m not aware of any.

        Sorry, I’m not sure who is understanding things differently from whom. Could you please be more specific? Did I misunderstand something? Did you find something I said hard to understand, or understand it in a different way from the interpretation I gave? Are you referring instead to the idiosyncratic interpretations of dimensions by the authors I cited? Or are you referring to your own experience with readers of your blog? I am genuinely unclear about what you mean. (So quick bright things come to confusion — and being neither, all the more so do I.)

        BTW, Terence Tao has an interesting blog entry from 2012 entitled “A mathematical formalisation of dimensional analysis,” albeit at a considerably more sophisticated level than this thread. But it may serve to show that some mathematicians think there is some virtue in keeping dimensions straight. https://terrytao.wordpress.com/2012/12/29/a-mathematical-formalisation-of-dimensional-analysis/

      • March 29, 2021 at 6:30 pm

        Andrew, I am honestly trying to make myself as clear as I can, whilst avoiding seeming too clear where I am not myself clear.

        The Terry Tao reference is interesting, as it postdates my own thinking.

        TT says ” the Greeks used geometric operations as substitutes for the arithmetic operations that would be more familiar to modern mathematicians.” Is this not strikingly at odds with what I have been arguing? He’s clearly a much better mathematician than me, so why worry about what I think?

        But how do people read this?

        My own reading is that TT claims that even as recently as 2012 ‘arithmetic operations’ were ‘more familiar’ to mathematicians. If we accept this (and why not?) then we might echo Lars in saying that:

        ‘If we cannot live with that contingency and uncertainty, well, then we are in the wrong business. If it is deductive certainty you are after, rather than the ampliative and defeasible reasoning in inference to the best explanation — well, then get in to familiar math or logic, not science.

        But this largely echoes Keynes and von Neumann. My point is ‘what about the unfamiliar mathematics?’ How to proceed?

        Interestingly, from about 2018 the observation that “the Greeks used geometric operations as substitutes for the arithmetic operations that would be more familiar to modern mathematicians” seems most insightful. These days arithmetic is seen as a special case of https://en.wikipedia.org/wiki/Algebraic_geometry , which in turn is often seen within the ‘frame’ of https://en.wikipedia.org/wiki/Category_theory . Consequently, I think it fair to say that ‘geometry’ is now a much more familiar term and that maybe the term ‘categorical’ will come to take on a greater significance. It is perhaps unfortunate that some ‘practical people’ find it ‘useful’ to reason without taking account of ‘contingency and uncertainty’, but Keynes and von Neumann, for example, seem to me to distinguish between short-term ‘utility’ and long-term sustainability. Admittedly, in their day the long term didn’t seem so pressing as it may do now. But if we take Lars’ concerns seriously, maybe we should move on from what was once familiar and embrace the new? (Or at least, acknowledge the limitations of the old.)

        (Actually, I have to say that my own sense of space and time is pretty unreliable, and I keep finding myself falling back on ‘arithmetical’ thinking. But I find it better to be? appear? uncertain and contingent in this regard than to always ‘shut up and calculate’, however ‘quickly’ and ‘brightly’. ;-) )

  25. March 29, 2021 at 4:27 pm

    Thank you, Andrew and Dave. I have understood almost everything but one point. As for my Question 2, Andrew may have mistaken it for a different question. Please check it again:
    https://rwer.wordpress.com/2021/03/14/on-the-use-of-logic-and-mathematics-in-economics-2/#comment-179432

    I am not asking whether physics uses polynomials or polynomial series. I am wondering why almost all physical quantities have a dimension that is a monomial in the three basic dimensions, i.e. [K], [L], and [T].

    For example, energy has the dimension [K][L]^2/[T]^2 and force [K][L]/[T]^2. I have never seen a quantity whose dimension is expressed, as a pure example, as [L]+[L].

    In mathematics, addition, subtraction, multiplication, and division are four binary operators that map a pair of numbers (real, complex, or others) to a number (except when the divisor is 0). Their roles are almost symmetrical. Even so, it seems that all physical quantities have monomial-type dimensions like [K][L]^2/[T]^2 and [K][L]/[T]^2. This is the question I am asking: why?

    • March 29, 2021 at 6:48 pm

      Yoshinori, Lockdown is easing here so I may ask a physicist. Meanwhile, I note that you say “physical quantity must have some invariant property that is conserved”. I would rather say that physicists seek ‘laws’ and ‘equations’, and hence invariants, and so that is what they find. But such findings are, as Lars suggests, conditional. In engineering and the science lab, these conditions are managed. Economists sometimes opine that these things will somehow look after themselves. But maybe they don’t always?

    • A.J. Sutter
      March 30, 2021 at 5:38 am

      @ Yoshinori Thanks, it’s an interesting question. There are a few ways of answering it; I hope I’ll hit on at least one that’s satisfactory. (NB: I was not a star in my abstract algebra class, but I’ll do my best.)

      1. There are indeed such additive expressions as ‘[L] + [L]’: the Taylor series expansion of a function of a dimensioned quantity is an example. This also works for compound dimensions, such as [M][L][T^{-1}] + [M][L][T^{-1}] .

      In this context, dimensions are more often expressed “monomially” for the same reason we usually use ‘5’ instead of ‘2+3’. I note though that this kind of expression can be used only when each term has the same dimension (as in your example). For the more general case of mixed dimensions, see ¶2 below.

      2. The four operations of arithmetic apply when we’re dealing with a field. In general, there are many algebraic structures that aren’t fields, and can’t use all four operations.

      There are currently 7 primary dimensions a/k/a base quantities accepted under international standards (this was modified in 2009, so it does change from time to time). They are the M (I think you meant this, instead of K?), L, and T you mention, plus I (electric current), Θ (absolute temperature), N (quantity of matter: this arises in such chemical concepts as molarity), and J (luminous intensity). There is also an identity dimension, equal to any of the other dimensions with exponent 0.

      Together, they don’t form a field; they form an Abelian group only. In their usual representation, the group operation is multiplication — whence the monomials you speak of. Note that the dimensions per se form an Abelian group, even if the quantities to which they are attached aren’t commutative (e.g. commutators in quantum mechanics, and Poisson brackets in classical mechanics).

      One feature of this group is that the exponents on the elements are in principle unrestricted (although in practice they tend to be rational, and typically < 10 or even less, at least in expressions I'm familiar with). So let's say that their only constraint is that any exponent α is ∈ ℚ (the set of rational numbers). Then there is a different representation we could make, although I don't recall seeing it used in practice: namely, we can express any dimension as an ordered 7-tuple (α_1, α_2, … α_7) where each α_i ∈ ℚ. The dimensional representation of energy, e.g., then would be (1, 2, -2, 0, 0, 0, 0) if we take the ordering of the tuple as I recited above.

      This set of 7-tuples then forms an Abelian group under addition, and is isomorphic to the usual representation. It's not a vector space, though, because it lacks scalar multiplication and an inner product.
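      The 7-tuple representation is easy to play with in code; here is a minimal sketch of my own (ordering the tuple as (M, L, T, I, Θ, N, J), per the recital above; the `Dim` class is an illustration, not an established library):

      ```python
      from fractions import Fraction

      class Dim:
          """A physical dimension as a 7-tuple of rational exponents over
          (M, L, T, I, Theta, N, J). Multiplying quantities corresponds to
          adding exponent tuples, i.e. the Abelian group operation."""
          def __init__(self, *exps):
              self.exps = tuple(Fraction(e) for e in exps)

          def __mul__(self, other):   # product of quantities: exponents add
              return Dim(*(a + b for a, b in zip(self.exps, other.exps)))

          def __pow__(self, n):       # n-th power: exponents scale by n
              return Dim(*(a * n for a in self.exps))

          def __eq__(self, other):
              return self.exps == other.exps

      M = Dim(1, 0, 0, 0, 0, 0, 0)
      L = Dim(0, 1, 0, 0, 0, 0, 0)
      T = Dim(0, 0, 1, 0, 0, 0, 0)
      ONE = Dim(0, 0, 0, 0, 0, 0, 0)   # the identity dimension

      energy = M * L**2 * T**-2        # (1, 2, -2, 0, 0, 0, 0)
      force = M * L * T**-2

      assert energy * ONE == energy          # identity element
      assert energy * energy**-1 == ONE      # inverses exist
      assert M * L == L * M                  # the group is Abelian
      assert energy == force * L             # energy = force * length
      ```

      The assertions check the group properties (identity, inverses, commutativity) and one familiar dimensional identity.
      
      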

      The assumption that exponent α can be any element of ℚ may not be realistic: it may be that α is restricted to a relatively small subset of ℚ, e.g. [-10, 10] or something even smaller, in observable physical laws. (Recall that higher-order terms in Taylor series expansions don't relate to the physical dimensions, which stay pretty low-order.) In that case, I'd suppose that some of the group properties (such as closure) may be lost in some sense.

      For example, consider Planck's formula for black-body spectral radiance as a function of wavelength: this has a factor of [L]^{-5}, which is not so typical. (This dimension is actually resolved to [L]^{-1} thanks to a factor of hc^2 in the formula, but let’s take it at face value for the sake of argument.)

      One could in theory multiply this quantity with itself to get [L]^{-10}, but maybe this doesn’t have any physical significance, maybe all observable physical laws have [L] with an exponent ∈ [-7, 7] (my arbitrary example). In that case treating the base quantities and their exponentiations as a group might be a convenient fiction, but not reflective of underlying reality. (I may be speculating out of my depth here.)

      3. Another aspect of your question is: why a group and not a field? I’d guess two possible answers might be (a) because that’s the way the universe was created, or (b) because that’s the way that humans categorize what they see in the universe. The second answer seems more plausible to me. This question, though, hinges on a topic in ontology known as “natural kinds,” which is definitely beyond my ken — though your question has gotten me curious about a recent edited volume on the subject: C. Kendig, ed. (2016) Natural Kinds and Classification in Scientific Practice (Routledge).

  26. March 30, 2021 at 1:31 pm

    Andy and Marsey,

    thank you for your detailed responses. I was surprised to learn that there were so many arguments about dimensional analysis in economics journals. (This is a comment on Andy’s comment of March 29, 2021 at 10:41 am.)

    Among the three options, I prefer number 2. Yes, if we think of the algebra of dimensions, it must be an Abelian group. This raises a new question: why is the group of dimensions commutative? I wonder if there are deeper reasons behind this fact. Do you know Alain Connes? I know him only by name and have no capability to understand his theory. He made a good contribution in clarifying the structure of C*-algebras. Inspired by this research he started to propose noncommutative geometry. It seems that he is claiming that the standard space concept (commutative algebra) is not sufficient to formulate quarks and that we should consider noncommutative geometry. My question is this: does the group of dimensions form a non-Abelian group in the noncommutative-geometry interpretation of physics?

    If you get any good ideas, please teach me.

    • March 30, 2021 at 3:05 pm

      You are asking a lot of good questions. But in standard British English, at least in my understanding, ‘teach’ seems an unfortunate term. I’m trying to convince you that you need to be very careful about how you interpret what anyone says, including those Lars quotes. Oh, and Marsay.

      https://en.wikipedia.org/wiki/Alain_Connes points to https://en.wikipedia.org/wiki/Connes_embedding_problem which says “Some results of von Neumann algebras theory can be obtained assuming positive solution to the problem. The problem is connected to some basic questions in quantum theory, which led to the realization that it also has important implications in computer science” and it points to https://www.nature.com/articles/d41586-020-00120-6 as providing a ‘negative’ answer.

      My caution would be to be just as careful in interpreting ‘negative’ answers as I have been suggesting you should be in interpreting ‘positive’ ones. Also, most people, like Lars, seem to think that science trumps maths, so this kind of thing has no implication for the safety of nuclear fusion or internet shopping.

      This kind of thing was a hot topic in the 70s. The furthest people got then was that at least one of nuclear fusion and internet shopping was unsafe according to physics and computer science as then understood. But I’m assured by world class experts in both fields that their subjects have moved on, and such views are hopelessly out of date. This has suggested to me that Keynes et al are also misguided (as Lars seems to think) and I’m hopelessly out of my depth and have nothing useful to contribute to economics or much else ‘practical’.

      More recently (as cited by Wikipedia) people have been moving beyond the old familiar maths, but I have noticed an unfortunate tendency to inadvertently fall back on old habits. I certainly struggle. So don’t take contemporary results too seriously yet. But I wouldn’t bet against them myself, and I have hopes ‘for my grandchildren’.

      Incidentally, my own work in this field was summarised by someone who had worked with Turing (who had worked with both Keynes and von Neumann) as ‘Turing was right’. Maybe soon such a statement might be enlightening. But I fear not yet. Hold on, though: according to https://en.wikipedia.org/wiki/Alan_Turing#Legacy “His Turing test was a significant, characteristically provocative, and lasting contribution to the debate regarding artificial intelligence, which continues after more than half a century.” Economists, please note! Minds opening? Minds changing?

      • March 31, 2021 at 3:04 pm

        Dave,

        > “[T]each’ seems an unfortunate term”.

        I am sorry. I do not understand very well. Is this a stylistic problem? Or should “teach” be replaced by another, more appropriate word?

        In my knowledge of English as a foreign language, this is a very difficult point to understand. Asking others to “teach me” does not seem offensive, does it?

        Or, are you thinking about the possibility of Alain Connes’s program?

      • March 31, 2021 at 5:06 pm

        Yoshinori. ‘Teach’ in this context is in no way offensive, but I would rather say ‘let us engage, so that we may learn from each other’. This avoids any suggestion that the ‘teacher’ is to be taken as an ‘authority’. Mentor might be a better word, but it still suggests that I am superior to you. If so, why would I talk to you (from a neoclassical economic perspective)?

        With respect to your comment to Andrew, I’m happy to switch to another thread: Lars and rwer both supply us with ample excuse to keep the discussion going.

      • April 1, 2021 at 6:21 pm

        Agreed! Let us try.

      • April 2, 2021 at 12:48 pm

        I have commented at https://rwer.wordpress.com/2021/04/01/the-methods-economists-bring-to-their-research/ . Coincidentally, I have just come across:
        The ‘fundamental proposition in the analysis of propositions’: “Every proposition which we can understand must be composed wholly of constituents with which we are acquainted.” B. Russell, 1912.

        Unfortunately, in the preface it credits one J.M. Keynes ‘as regards probability and induction’, so I guess we could cite plenty of authority, backed up by our experience of ‘Keynesianism’, to the effect that we should discount such nonsense. Surely we’ve moved on in our understanding since then?

        (Actually, I hope we have, but this may be a good point to ‘regress’ to before we can hope to ‘progress’.)

        Interesting …

    • A.J. Sutter
      March 30, 2021 at 3:09 pm

      @Yoshinori The theory of Connes is way over my head. But I have a PDF of his book, and I know how to use search. As far as I can tell, he doesn’t discuss anything pertinent to base quantities.

      Chapter 1, section 1, entitled “Heisenberg and the Noncommutative Algebra of Physical Quantities,” reviews some standard material about classical and quantum mechanics, in particular the Poisson bracket and commutator I mentioned above. These are non-commutative algebraic entities; that is the inspiration for his book.

      But as far as I can tell that doesn’t carry over to the dimensional analysis: e.g., both the standard commutator and its negative have the same dimension. Here’s what that means:

      The standard QM commutator is:

      [q, p] = (qp − pq) = iℏ, (*)

      where q is position (dimension [L]), p is momentum (dimension [MLT^{-1}]) and ℏ is the reduced Planck constant (Planck’s constant divided by 2π, with dimensions of energy・time, [ML^{2}T^{-1}]). It’s easy to see that equation (*) is dimensionally homogeneous. Since the commutator is antisymmetric, we have

      [p, q] = (pq − qp) = −iℏ,

      i.e. exactly the same dimensions, multiplied by a different scalar. So it looks as though it is not the group of physical dimensions that is affected, but rather the quantities to which they are attached. If I may make a naïve speculation, maybe this can be understood as follows: a novel geometry may cause your measurements to differ depending on the sequence in which you make them, without changing the qualitative essence of what you’re measuring.

      Emphasis in the above discussion on “as far as I can tell.”
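
      The dimension bookkeeping in the comment above can be sketched in a few lines. This is an illustrative check only, not anything from Connes: each dimension is written as a tuple of (M, L, T) exponents, so multiplying quantities adds exponents, and the names q, p and hbar simply mirror the discussion.

```python
# Represent a physical dimension by its (M, L, T) exponents.
# Dimensions form an abelian group under component-wise addition
# of exponents (i.e. multiplication of dimensions).

def dim_mul(a, b):
    """Dimension of a product of two quantities."""
    return tuple(x + y for x, y in zip(a, b))

DIMENSIONLESS = (0, 0, 0)   # the identity dimension (all exponents 0)

q    = (0, 1, 0)            # position: [L]
p    = (1, 1, -1)           # momentum: [M L T^-1]
hbar = (1, 2, -1)           # reduced Planck constant: [M L^2 T^-1]

# qp and pq carry the same dimension, so [q, p] = qp - pq is
# dimensionally homogeneous with hbar whichever sign it takes.
assert dim_mul(q, p) == dim_mul(p, q) == hbar

# The identity dimension leaves any dimension unchanged.
assert dim_mul(q, DIMENSIONLESS) == q
```

      The point of the sketch is the one made in the comment: swapping the order of q and p changes the scalar sign, but the dimension attached to the commutator is the same either way.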

      • March 31, 2021 at 3:10 pm

        Dear Andy,

        thank you for the reply. Our comments have become too long; their number now exceeds 90. It is time to change threads. The question of the production function and TFP was not discussed. Let us try to find a good occasion.

      • March 31, 2021 at 5:26 pm

        Looking at the above and A J Sutter’s comment of March 30, 2021 at 5:38 am, I see the problem of non-commutativity as lying in a rational rather than a complex-number understanding of T. The circular unit derived from Pythagoras’s theorem corresponds to clock time on our global surface, wherein ten to three is not the same as ten past three, since it involves h (motion in the opposite direction).

        “There is also an identity dimension, equal to any of the other dimensions with exponent 0”.

        This surely is the Bit of mathematical information theory. One can add as well as multiply these Bits, if not the information they are bits of. But perhaps that’s too obvious for the Humean scientific parliament to see.
