## It’s the math again

from: **David Ruccio**

Every time there’s a controversy in economics, the problem of mathematics seems to be at the center of the discussion. That’s because, in economics, discussing the role of mathematics is inextricably related to issues of science, epistemology, and methodology, which are rarely discussed explicitly but are implicit in pretty much every such debate.

In short, we’re never very far from physics-envy.

That’s certainly the starting point of Noah Smith, who compares the mathematics of physics (the supposed “language of nature”) with that of macroeconomics (in which it serves to “signal intelligence”). And then, of course, Paul Krugman rides to the rescue, arguing that mathematical models, when “used properly, . . . help you think clearly, in a way that unaided words can’t.”

Uh, OK. “Used properly” is the operative clause there. The real question is, what *is* the proper use of mathematics in economics? And, what is the proper way of thinking about the proper use of mathematics? That’s where all the issues of science, epistemology, and methodology come to the fore.

As it turns out, one of the first articles I ever published, “The Merchant of Venice, or Marxism in the Mathematical Mode,” was on that very subject. I had mostly ignored mathematics during my undergraduate years but then, in graduate school and especially when I began to conduct the research for my dissertation (on mathematical planning models), I realized I wanted both to learn the econmath and to learn how to think about the econmath. Ironically, I ended up teaching “Mathematics for Economists” to first-year doctoral students for over a decade (it was basically a course in linear algebra and multivariate calculus, in which students also had to write a paper on the history and/or methodology of the mathematization of economics).

The argument I made in my dissertation and later in the “Merchant of Venice” article was that economists (mainstream economists especially, but also not a small number of heterodox economists, including Marxists) treated mathematics as a special language or code. They considered it special either in the sense that it was the language of nature (and therefore overprivileged) or a neutral medium for thinking and expressing ideas (and therefore underprivileged). Either way, it was considered special.

My alternative view was that mathematics was just one language among many, and therefore one set of metaphors among many. And like all metaphors, it served at one and the same time to enable and disable particular kinds of ideas. Therefore, we need to both write mathematical models and to erase them in order to produce new ideas.*

But that’s not how most economists think about mathematical models. And when they do think and write about them, they tend to invoke one or another argument for mathematics as a special code. They also tend to forget about all the other uses of mathematics in economics—not only as a signalling device but as a hammer to bludgeon all other approaches out of existence.

It’s the tool that is often used, in economics, to separate science from non-science—which, of course, if you say it quickly, becomes nonsense.

*That argument, concerning the not-so-special status of mathematics, so incensed one of my colleagues he attempted to derail my tenure case. Fortunately, another of my colleagues forced him to back down and I ended up receiving a unanimous recommendation.

“My alternative view was that mathematics was just one language among many, and therefore one set of metaphors among many. And like all metaphors, it served at one and the same time to enable and disable particular kinds of ideas.”

Just to clarify: How close is your view to D. McCloskey’s ‘rhetoric of economics’ approach?

It’s close—but close only counts in horseshoes.

Deirdre and I arrive at the rhetoric of mathematics in economics from very different directions: she from American pragmatism and rhetoric, me from a more French tradition of poststructuralism (including the work of Althusser, Foucault, Bachelard, and Badiou). Probably our main point of disagreement is on the question of power. I see it (pervasively, across the discipline of economics, especially in the use of mathematics); she doesn’t.

I hope that helps.

I see. Thanks. This is a difference indeed.

The use of mathematics in economics is rarely scientific. I’m siding with those who say the mathematics used in economics is mostly heuristic, physically detached and mainly used for posturing or dominating encroaching non-specialists.

There is a second underlying issue here, which Noah Smith got closest to. That is, the computational engine used in all mathematical modelling is based on the natural structure of numbers. But we do not invent that structure. That structure is collected as data from the mathematical field and then used as the building blocks for your equations. Thus the building blocks of mathematics, numbers, are not “one language among many”.

Every equation you’ve ever computed numerically has sourced from the natural structure of numbers. There is no alternative.

Note: I am confused as to why the 25-year-old economics opinion piece referenced here is behind a paywall.

I do not think mathematical language is a necessary or sufficient condition for neoclassical or mainstream economics. Nor is non-mathematical language a necessary or sufficient condition for heterodox economics (including Marxist economics and the economics of Marx). Moreover, you cannot talk about the real economy (e.g., discuss real wages) without, at a minimum, arithmetic.
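The minimal arithmetic the comment insists on can be written in two lines. The function name and the figures below are my own, purely for illustration: a real wage is just the nominal wage deflated by a price index.

```python
# Illustrative only: deflate a nominal wage by a price index (base = 100).
def real_wage(nominal, price_index, base=100.0):
    return nominal * base / price_index

# A $25 nominal wage under a price index of 110 is worth about $22.73
# in base-year terms.
print(real_wage(25.00, 110.0))
```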

Mathematics has nothing to do with science, even though physics uses a lot of mathematics to develop its theory through symbolic logic. Economists use mathematics to imitate the appearance of physics and to pretend they are doing science, which economists have little clue about. (Strange but true.)

The confusion comes from the fact that physics, which is a pinnacle of science, uses a lot of mathematics. In a superficial and misunderstood imitation of physics, economists feel that they must use a lot of mathematics, if they were to claim economics as a science. It merely shows economists (including Samuelson) really understood neither mathematics nor science, even though they use a lot of mathematics.

Economics is not a science, even though it has mathematical theories and plenty of empirical analysis of economic data. Economists are faking doing science, hoping eventually it will make it a science: fake it until they make it.

It was always so, ab ovo, but we are beginning to learn it seriously only now. Any use of math that involves treating the economy, national or global, as an isolated system, and attempts to construct an axiomatic model for it, is pure and unmitigated baloney.

Moreover: IT IS DEADLY DANGEROUS. Environmental and social externalities and uncertainty are incommensurably more important than the penny-wise efficiency of the market. Just today I saw in Foreign Affairs an article, “The Day the Earth Ran Out”: in 34 weeks, humanity’s demand for natural resources exceeded the planet’s capacity to renew them. Not to speak of the billions of tons of chemicals released annually into the air and oceans, etc.

When I look at your excited (even heterodox) discussions of how many axioms, or how many angels can dance on the nose of Adam, I recall with regret (you know, it is sometimes pleasant to recall your younger years) the 1970s, when I came to this country with great respect for economics in general and Anglo-American economics in particular, for Nobels, etc. And for the hundreds of millions of Americans who pay your counterproductive salaries.

Have a pleasant day.

Money is created as debt by borrowers and then lent again as existing money by savers. The result is that the principal of debt exceeds the principal of money available to pay it. This creates multiple debts dependent on the same created principal, which makes the system dependent on ever-increasing money/debt creation by banks.

Therefore, the whole system falls apart in mass defaults any time borrowing from banks (money creation) slows down, which it must do periodically as all things in this world work in cycles.
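The “twice-lent” arithmetic above can be sketched in a few lines. This is my own toy illustration of the claim, not the linked paper’s model, and the rates are hypothetical: a bank loan creates money and a debt; a saver re-lends the same money, creating a second debt against the same principal.

```python
# Toy sketch of the "twice-lent money" claim (illustrative rates only):
# one principal, two debts owed against it.
def twice_lent(principal, bank_rate, saver_rate):
    """Return (money_in_circulation, total_owed) after one lend/re-lend cycle."""
    money = principal                              # the bank loan creates new money
    debt_to_bank = principal * (1 + bank_rate)     # borrower owes the bank
    debt_to_saver = principal * (1 + saver_rate)   # second borrower owes the saver
    return money, debt_to_bank + debt_to_saver

money, owed = twice_lent(100.0, 0.05, 0.03)
print(money, owed)  # money stays 100.0 while total owed is roughly double that
```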

Here is my WEA published paper

http://peemconference2013.worldeconomicsassociation.org/?paper=proposed-new-metric-the-perpetual-debt-level

Here is my full proof in graphical form. http://paulgrignon.netfirms.com/MoneyasDebt/twicelentanimated.html

This is barely even “mathematics”. It is grade school arithmetic applied to the REALITY of the banking system, the one thing economists seem most determined to IGNORE.

Your post complements Bryan Caplan’s post “Economath Fails the Cost-Benefit Test,” which can be found here: http://econlog.econlib.org/archives/2013/08/economath_fails.html

Below is my response to his post:

“The idea that somehow the logic of mathematics and the logic of spoken debate are incompatible is fallacious. They are equivalent. A logically sound mathematical formulation will have an equivalently logically sound spoken statement and it will be supported by empirical evidence. This is commutative.

Our only hope for describing the world around us is through observing our surroundings. Much of the legerdemain we see in economath is a result of restricting the information with which the models are compared. This restricts the range of validity of the model while it is presented as having wider applicability. Call this the fatal conceit of the pretense of knowledge.

We need both forms of reasoning; however, we need to understand the limitations of both and ensure that we are not misapplying either (the observations of Das Kapital come to mind). As examples of sound logic, we have Mises’s observations in Socialism and von Neumann and Morgenstern in Theory of Games and Economic Behavior.

Because something is hard and difficult to understand does not mean that we should avoid it. Take it as a challenge to expand your understanding. If you don’t train yourself in rigorous mathematics, it is impossible to tell when someone is blowing smoke.”

I agree with you on the need for science-based economics. There is no need for physics envy. The logic that applies to statistical mechanics (physics and thermodynamics) also applies to economics. Logic is, after all, logic.

I am in the process of working through the mathematics of aggregating individuals acting under game theory. I found that the consequent aggregation results in a functional version of thermodynamics. Here is my blog http://statisticaleconomics.org/testing-and-development/pre_alpha/elementary-principles-in-statistical-economics/

Wayne Saslow, a physicist, worked through many of the analogies in this paper: http://users.df.uba.ar/dasso/fis2byg_2009/saslow99_am_j_phys_analogia_economic_thermo.pdf

John Lienhard, another physicist, adopted a statistical mechanical framework and found that income distributions are distributed canonically in this paper: http://www.uh.edu/engines/StatMechMacro.pdf

There is fertile soil to be tilled here as the mathematical structure is very powerful.
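The kind of aggregation described above can be sketched in a few lines. This is my own toy illustration (in the spirit of the random-exchange models of Yakovenko and others mentioned elsewhere in this thread, not the commenter’s actual derivation): agents trade in random pairwise exchanges, money is conserved at every step, and the distribution relaxes toward the exponential (Boltzmann-Gibbs) form, in which most agents end up below the mean.

```python
import random

# Toy random-exchange economy: N agents, each trade pools a random pair's
# money and splits it at random. Money is conserved exactly; the equilibrium
# distribution is approximately exponential (Boltzmann-Gibbs).
random.seed(0)
N, T = 1000, 200_000
money = [100.0] * N                    # everyone starts equal

for _ in range(T):
    i, j = random.randrange(N), random.randrange(N)
    if i == j:
        continue
    pot = money[i] + money[j]          # pool the pair's money...
    money[i] = random.uniform(0, pot)  # ...and split it at random
    money[j] = pot - money[i]

total = sum(money)
below_mean = sum(m < 100.0 for m in money)
print(round(total), below_mean)  # total conserved; most agents below the mean
```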

“The logic that applies to statistical mechanics (physics and thermodynamics) also applies to economics. Logic is after all logic.”

This simply isn’t true, and it is at the heart of what makes economic theory both basically wrong and yet so abstruse that one can hide anything in it. Maxwellian statistical mechanics assumes atoms are free to move in all directions, so that direction is irrelevant, which, when they are confined to a tube or road (more generally, a “channel”), is manifestly not true. Oliver Heaviside, a pioneer in mathematical methods, electrical theory, and long-distance telephony, was able to drastically simplify Maxwell’s equations and, in the case of electric circuits, reduce them to algebraic equations with usually negligible statistical uncertainty in the relationships: froth on the beer rather than condensate in the still.

Here’s a snippet from Pascoe’s “Teach Yourself New Mathematics” on VECTORS, which in economics are understood to be a row of values equivalent to the points one plots in order to draw a static graph.

“In our everyday life, although we may give little thought to the matter, we deal with two mathematical entities, namely quantities which have a definite magnitude but for which direction (say, of movement) has no meaning, and quantities for which direction is of fundamental significance.

It is not difficult to distinguish between these concepts. In physics, for example, length, mass and speed have magnitude but no direction. Such quantities are defined as SCALARS. Other physical units, such as force, momentum and velocity, depend very much on the direction in which they act. They are defined as VECTORS.”

The illogic that applied the magic of statistical mechanics to economics did so by reducing all its commodities to one, i.e. money or its equivalent. If that is all there is, there are no channels directing it to other economic entities, so all its values are scalar, and the theory suggests that he who holds the purse need not distribute the contents.

The initial derivation of statistical mechanics (Gibbs) and the Boltzmann equation both rely on a simplification: that particles do not interact with each other. This assumption is only valid for low-density gases well above absolute zero. However, it does provide some insight and a framework for analyzing the much more common, complex, and relevant problems. When beginning the process of learning, you have to start somewhere. This is why we start with a simple model that begins to hint at the underlying complexity and provides a reasonable explanation. Don’t worry about second, third, …, nth order effects too early. Introducing those things destroys parsimony within the model (Zellner et al.).

There is a progression in physics that I do not think you appreciate. We start by hypothesizing a structure and then test the hypothesis. In an informal way we conduct Bayesian hypothesis testing, comparing the relative information entropy of our model’s explanation to real-world observations. We look into the noise and try to understand why it is acting the way it is. We keep iterating this process over and over again, and this is how we develop our theory and understanding of the world around us. In actuality, the models that you are so envious of vary from poor to bad in explaining things on the microscopic level. The results we get from experiment are noisy, especially as we increase our measurement sensitivity. But this also creates more fertile ground for discovery and understanding.

You are quite right that on the microscopic scale the action is based on vectors. That is true in physics and it is also true in microeconomics. However, the quantities that we obtain for exogenous (in physics these are called extensive) parameters are statistical averages of scalars. We don’t presume knowledge of the intensive (endogenous) vectors, just that they exist. We resolve this through the calculus of variations, which is not used very much in economics. The Euler-Lagrange equation is derived from a variational method. I had a graduate course in micro econ, and its treatment of the Euler-Lagrange equation compares to the full theory the way teaching your kids to add and subtract compares to abstract algebra. They are related, but there is an underlying richness that is not conveyed.

You can add as many variables as you want and aggregate in whichever way you want. The problem is that the functional equations rapidly devolve and become entirely intractable to compute, losing much of their utility. What statistical mechanics does is to formally take lots of something small and examine its aggregate behavior. This is why it is so useful. By knowledge of a few parameters we can make a fair guess as to how the system will behave. Fitting models for an equation of state is awful: the plots of the data are horrendous and we just make the best guess that we can. In my experimenting with adapting these computational methods to economics, I found there is comparatively a lot less noise in the econ systems.

The reason why our models are so bad is the evaluation of the integrals. We deal with expectations, and to create those we have to go through herculean efforts to approximate the actual integrals, which are formally impossible to evaluate exactly. So we approximate. This is why our models in engineering have clearly defined ranges of applicability. Our approximations are very limited, so you have to be careful which ones you use and for what. Use something improperly and you can kill someone. The more I study economics, the more I see economists not understanding this fundamental point. Hayek rightly called it a fatal conceit.
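The point about expectations and integrals can be made concrete with a toy Monte Carlo estimate. This sketch is mine, not the commenter’s: an expectation E[f(X)] is an integral we often cannot evaluate in closed form, so we estimate it by sampling, and the sampling error shrinks only like 1/sqrt(n).

```python
import random

# Estimate E[f(X)] by averaging f over random samples of X.
random.seed(1)

def mc_expectation(f, sampler, n):
    return sum(f(sampler()) for _ in range(n)) / n

# E[X^2] for X ~ Uniform(0, 1) is exactly 1/3; compare a small and a large sample.
est_small = mc_expectation(lambda x: x * x, random.random, 100)
est_large = mc_expectation(lambda x: x * x, random.random, 100_000)
print(est_small, est_large)  # the larger sample lands much closer to 1/3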

Perhaps you need to understand I am not a professional economist, but was trained by low temperature and semiconductor physicists, then studied maths, statistics and computing before requalifying in information science. I am not envious of YOUR physical models, for like Heaviside I have developed much simpler ones of my own, and in discussion, like Einstein I like to keep things simple (but not – like Gibbs missing the point of quaternions, or economists interpreting Bayes one-dimensionally – too simple). “There is a progression in [circuit] physics that I do not think that you appreciate”, despite my having indicated it in my second para and its ubiquitously useful results. Did you think me simplistic because I tried to stay simple? I don’t like the thought of that, nor do I like Hayek; but in the end you are right (and I apologise for the harshness this has induced in my critique). In economics, “use maths improperly and you can kill lots of people”.

Quaternions, very interesting stuff. To be honest this was the first time I have come across them. I am a nuclear engineer and work a lot with the Boltzmann transport equation. It seems the quaternions are the direction of an orthogonal set of the directions of the unit vector (i,j,k) with the remaining orthogonal component, time. I can see how this would be useful in relativistic studies. The dilations would be linear operations on the quaternion vector. My current research is in control theory, so this will prove most useful. Thank you.

I wouldn’t describe Gibbs as too simple. At the time of the development of his theories, we did not have information theory. His simplification to describe equilibrium states was unreal: he maximized the uncertainty of the distribution of Hamiltonians to make it time invariant. Jaynes identified this as the principle of maximum entropy in the 1950s, after Claude Shannon developed information theory. I do not see how Gibbs’s approach would lead to a resultant loss of information; Gibbs was explicit about formally defining his ignorance.

My critique of your comment is centered on the idea of loss of knowledge of the components. In the derivation, the individual components are defined in a vector field. Because there is no perceived direction, they are taken as isotropic (a maximum entropy condition). You would have to assume a set of information (reducing the entropy) in order to bring the direction of motion into relevance on a macroscopic scale. We have this problem when we talk about plasma fields (nuclear fusion); it makes the math much more complex. Thus, without qualified macroscopic information of an applied vector field, our macroscopic variables are all scalar quantities. Want to kill the vector? Integrate over all solid angles, position, and velocity, and make it time invariant.
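The “integrate over all angles and kill the vector” point can be checked numerically. A small sketch of my own, under the isotropy assumption stated above: average an isotropic set of 2-D velocity vectors and the vector mean vanishes, while the scalar mean speed survives.

```python
import math

# 360 unit-speed velocities with directions uniform around the circle.
n = 360
speeds = [1.0] * n
vx = sum(s * math.cos(2 * math.pi * k / n) for k, s in enumerate(speeds)) / n
vy = sum(s * math.sin(2 * math.pi * k / n) for k, s in enumerate(speeds)) / n
mean_speed = sum(speeds) / n

# The vector mean is ~(0, 0): isotropy erases direction.
# The scalar mean speed is 1.0: magnitude information survives aggregation.
print(vx, vy, mean_speed)
```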

In my application of statistical mechanics to economics, I used Game Theory’s axioms and followed Gibbs’s derivation. In the process of aggregation, information about the microstate is lost, and information about the macrostate is gained. The information that is gained is entropy, an extensive scalar quantity, and macroscopic structure/properties that are not apparent just from the microstate. I found no representation of the Euler equation that contained entropy in any form in any mainstream macro paper. Just about every economist performed the aggregation incorrectly, and we have erroneous results. Wonder why there is so much contention over the mathematics? It’s because they have not identified the proper set of functional constraints.

Theoretical economics as it stands today is not bound by the second law. The only school of economics that discusses spontaneous order as a fundamental process is the Austrian school. We know that what drives spontaneous processes in the real world is a positive entropy gradient. Thus, the only school that gives serious treatment to the second law is the Austrian school. Like Hayek or not, that is irrelevant. You cannot avoid the second law.

“If someone points out to you that your pet theory of the universe is in disagreement with Maxwell’s equations—then so much the worse for Maxwell’s equations. If it is found to be contradicted by observation—well these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.” -Sir Arthur Eddington

that is a kick-*ss (or, to be pc, excellent) post.

hadn’t ever heard of that ’71 paper. yakovenko (with a review by j. b. rosser) makes similar arguments (on arxiv.org), though champernowne and others made some even back in the 1940s. (the stat mech analogy does have some possibly fatal flaws which yakovenko has not really discussed, but it’s an analogy.)

saslow had an old paper in AmJPhys way back, though his formulation was disputed. (Duncan Foley also had one in JEcTh in the 90s, and there were some others.)

My own view is a lot of this can be derived even from de Broglie’s (of quantum theory) last book or Verlinde’s entropic gravity; you get arrow-hahn-debreu, SMD, keen and chaos, though macroeconomics requires a lot of aggregation to define new variables (e.g. r. goodwin). you have to add bounded rationality (h. simon), and maybe behavioral economics, which i find a bit co-opted.

p.s. that is the same saslow paper i already have. one can go back to Am Math Monthly around 1945 for similar stuff; the best part of that old paper is the discussion of the slutsky-yule theorem. it’s data mining, possibly all in your mind (bishop berkeley, roundoff errors, moving averages, arbitrary cutoffs, etc.)

Thank you for the feedback. I downloaded the Yakovenko paper and gave it a quick glance. I was unable to find the Rosser review that you mentioned.

What are the pitfalls that you see needing to be addressed? As an amateur economist, I am quite blind to where there is difficulty integrating this approach with current economic theory.

Wayne Saslow had another paper? What was the dispute? Saslow’s 1999 paper made sense to me, so if it was disputed I am not aware of where the dispute would originate. Also, I did find a paper by two economists (Smith and Foley) from 2005 (“Classical thermodynamics and economic general equilibrium theory”) where they used Saslow’s paper but failed to apply entropy properly and thus had some difficulty.

Jaynes had a 1991 paper (“The second law as physical fact and as human inference”) that was very informative; his work on developing MAXENT was incredible.

Thank you again!

When using mathematical models, neoclassical economists choose variables of their own subjective choice. Most of them are fictional. They ascribe subjective qualities and behavior to those variables. In their “objective” models, government always enters as the “bad guy”. Then they manipulate those variables, calling this manipulation a process of scientific discovery.

That is called scientism.

August 23, 2013

“Every time there’s a controversy in economics, the problem of mathematics seems to be at the center of the discussion.” Nonsense!

Mathematics as a “problem” ? Pun intended ?

Interested parties should consult the great philosopher Henri Bergson [a math prodigy in his youth] as to what constitutes a “false problem”. From a Bergsonian perspective, the “problem of mathematics” in economic thought and analysis would qualify as a ‘red herring’, tossed out to mask the uncomfortable stigma that much of the subject [by no means all] is now “between paradigms”. And there is now a challenging paradigm that does not so carelessly dismiss the great economic thinkers of the past [Marx definitely not included there], nor the many serious thinkers who tried to apply mathematical analysis to economics. That most of these attempts at Mathematical Economics did not always hit the mark [although some did] is no reason to “wipe the slate clean” and “start all over again”. The approach of the great Alfred Marshall and John Maynard Keynes as to how mathematical exposition should be integrated into economics deserves respect. Both had lots to say on that subject. M. Paul Davidson knows lots more than I do about what those two authentic giants had to say about it. Neither of them was an amateur when it came to applying mathematical analysis [especially geometry] to economic reasoning. Or, better still, read page 80 of Telos & Technos, 195-page edition:

Quoting the great mathematician and “all-rounder” [pun not intended], Norbert Wiener, who shared the same premises not far from Paul A. Samuelson at MIT:

“The success of mathematical physics led the social scientists to be jealous of its power, without quite understanding the intellectual attitudes that had contributed to that power” … “The mathematical physics of 1850 became the mode of the social sciences” … “Very few econometricians are aware that if they are to ‘IMITATE’ the procedure of modern physics, and not its ‘MERE APPEARANCES’, they must begin with a critical account of THEIR quantitative notions and the means adopted for collecting and measuring them.” [GOD & GOLEM, 1964]

As for applying mathematical analysis to the “economics” [???] of Marx and its almost farcical hyper-linear conspiratorial determinism: interested readers should consult the late, highly respected Michio Morishima’s “MARX’S ECONOMICS: A DUAL THEORY of VALUE and GROWTH.” Morishima [1923-2005] was a very competent mathematician and economist, who was not at all dismissive of Marx’s efforts. But he just couldn’t launder Marx’s stuff into “real-world” relevance. I wonder how Michio Morishima would have put his conclusions into Japanese? [Perhaps in a haiku]: “You can’t make a silk purse out of a sow’s ear”? Thank you for your patience. Norman L. Roth, Toronto, Canada. Please GOOGLE: [1] Norman Roth, Technos [2] Norman Roth, Origins of Markets [3] Telos & Technos, Roth

Crabel #13

If your current interest is in control theory then quaternions are indeed relevant, but you seem to be misconstruing them. Quaternions are not sets but operations on two-dimensional continuous spaces, which rotate them (and all their contents, e.g. your set of directions) into orthogonal planes. Reduce the two-dimensional spaces to one-dimensional lines and you have the complex number plane, where performing the operator i four times brings you back where you started; but likewise, so does -i, i.e. the motion can run in either direction. As successive application of the quaternion ijk operators leaves their plane upside down, their conjoint (or repeated) application is needed to get it back again. If arithmetic and the algebra of variables represent operations on the contents of a plane, geometry and topology represent actions directly on the plane (and therefore on all of its contents), and this is what differentiation (and conversely, integration) ultimately amounts to. Granted the expanding spherical universe generated by a Big Bang, and the coordinate system which has enabled us to map our spherical earth, this is a completeness theorem, i.e. it accounts for everything.
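Hamilton’s defining relations behind the rotations described above are easy to verify directly. A minimal sketch of my own (the helper `qmul` is mine, not from any source in this thread): i² = j² = k² = ijk = −1, four applications of i return you to the start, and ij = k while ji = −k, so the operations do not commute.

```python
# Multiply quaternions represented as (w, x, y, z) by Hamilton's rules.
def qmul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)

print(qmul(i, i))                    # i squared is -1
print(qmul(qmul(i, i), qmul(i, i)))  # four applications of i: back to 1
print(qmul(i, j), qmul(j, i))        # ij = k but ji = -k: not commutative
```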

Right angles directionally represent complete differences, and physically the differential of an electrical sine wave is a magnetic cosine wave orthogonal to it. Engineers and architects represent three-dimensional objects by means of three two-dimensional drawings (c.f. the differential of x cubed); given the apparatus, they are free to draw whatever object they like on it. For that reason (and we are coming now to control theory), I use a term I learned from statistics, “degrees of freedom”, to describe what quaternions represent, the fourth one being freedom to correct whatever is going wrong. Conversely, if the fourth represents an objective to be controlled, the other three represent different types of corrective. With 3-dimensional space abstracted to leave only time, such correctives can only represent errors from the past, present and future. I leave you to so analyse the driving of a car, and to look up for yourself the PID servos which now automate the error correction in electronic control systems.
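The past/present/future correctives described above are exactly the structure of a PID loop, which can be sketched in a few lines. This is my own toy illustration; the gains and the first-order plant are hypothetical choices, not anything from the thread: the P term acts on the present error, the I term on the accumulated past error, and the D term anticipates future error from its current rate of change.

```python
# Minimal PID loop driving a simple first-order plant toward a setpoint.
def run_pid(setpoint, kp, ki, kd, steps=400, dt=0.1):
    y, integral, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt                        # past: accumulated error
        deriv = (err - prev_err) / dt               # future: predicted trend
        u = kp * err + ki * integral + kd * deriv   # present + past + future
        y += (u - y) * dt                           # first-order plant response
        prev_err = err
    return y

print(run_pid(1.0, 2.0, 0.5, 0.1))  # settles near the setpoint 1.0
```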

Let me come back to Gibbs’s method. First of all, it was pre-relativity, assuming infinite space and Cartesian coordinates with no provision for error correction. Secondly, it applied differentiation by orthogonality not to space but to types of object, with no completeness theorem to ensure all the relevant types were orthogonal and accounted for. It is in this sense that it is too simple. It has not merely lost the essential “macro” information of boundedness but, as theory, has not drawn attention to the reality of it. Thirdly, it has lost the point of Shannon’s information theory, which starts from the understanding that the purpose of communicating information is, as in indexing or classifying a book, to direct attention to where meaning can be found. The geometrical coordination is simple and unambiguous, whereas Gibbs’s method doesn’t even address reality in all its rich subtlety. Interesting analogies here include the hypergeometric function, which can mimic most others, and S. R. Ranganathan’s PMEST classification system for libraries.

I did not find your arguments simple, I found them refreshing. What I found simple were the tools used in modern economics. I am sorry that I did not make that clearer earlier (I am rereading our comment thread).

First, by Gödel’s second incompleteness theorem, no sufficiently strong axiomatization can prove its own consistency. Is there a proof showing quaternions as a contradiction to Gödel? I’d be interested to see it.

Orthogonality represents logical independence. This is a key point in using the simplification of separability, which allowed Gibbs to derive Maxwell’s distribution. The physical interpretation is that the system must be at a low density, such that the particles do not interact. This is patently untrue for any physical system. Yes, he did use it in the process of differentiation to obtain the Hamiltonian. However, the more significant use of orthogonality was in the process of integration during aggregation. The interaction of particles, or even tracking the relative motion between any more than just two particles in a relativistic framework, is impossible in closed form and very difficult in computer simulation.

True, Gibbs uses Euclidean space. John von Neumann used Hilbert manifolds and was able to show that this allowed quantization of the states (energy is not a differentiable function), which, when applied to Gibbs’s framework, was seamlessly incorporated. One could, I imagine, even apply Gibbs to Riemannian manifolds and incorporate relativistic effects. That system would not be separable, and from the standpoint of a generic coordinate system referenced to a lab frame it would be most cumbersome. It is for this reason that even in plasma physics, where we could treat things relativistically, we still treat the particles classically. The math is almost impossible to do as it is; making it more difficult would not give any insight. We accept the error because we cannot do any better.

The loss of macro information due to the selection of an infinite boundary condition, which you decry, is limited in just about every practical problem. This is because the scale of the systems >> the diffusion length of the particles in the media. Because of this, there are very few geometric effects. This is a useful approximation to gain understanding of the system. Only when we start getting into microscopic structures does this approximation start to introduce an appreciable error. It is at this point that we have to rely on numerical solutions to predict actual behavior with satisfactory accuracy.

Any such needed refinement of the model requires the introduction of additional information, which greatly increases the complexity of the mathematics. You have to add information in order to reduce the entropy (Bayesian hypothesis testing). This entropy reduction reveals the much richer complexity that the microstate holds, but it comes at the cost of a more complex model. "How accurate do you need it?" is often superseded by "how accurate can you afford to compute it?"
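To make the entropy-reduction point concrete, here is a minimal numerical sketch (the joint distribution is invented purely for illustration): adding information, i.e. conditioning on an extra variable, can only lower, never raise, the Shannon entropy.

```python
import math

def entropy(p):
    """Shannon entropy, in bits, of a probability vector."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# Hypothetical joint distribution p(x, y) over two binary variables.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

# Marginal entropy H(X): uncertainty before any extra information arrives.
px = [sum(v for (x, _), v in joint.items() if x == xi) for xi in (0, 1)]
h_x = entropy(px)

# Conditional entropy H(X|Y) = H(X,Y) - H(Y): uncertainty left after observing Y.
h_joint = entropy(list(joint.values()))
py = [sum(v for (_, y), v in joint.items() if y == yi) for yi in (0, 1)]
h_x_given_y = h_joint - entropy(py)

print(h_x, h_x_given_y)  # conditioning strictly reduced the entropy here
```

In this toy case, observing Y removes about 0.28 of the 1 bit of uncertainty about X; the price is that the refined model must carry the full joint distribution rather than just the marginal, which is the complexity cost described above.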

This is why we use fundamental and very clear statements, without too much formality, to convey deeper understanding.

You may want to look into Jaynes' work on information theory. He did a significant amount of work generalizing that theory into other fields. It is not just about the informational content of words.

Wonderful discussion. There is another (new?) phenomenon in the use of economic maths that I've seen, and I would be interested in hearing of similar experiences, or comments on where it might come from.

About 3 years ago I attended a small bookstore gathering of left/liberals who had come to hear an interview with a then medium-status local opposition (Australian) politician, for what insight he might have to offer. Just prior to the event, said person grabbed the role of Australian Leader of the Opposition and is now scheduled to become our next PM. This person, one Tony Abbott, was asked by the interviewer what books he read.

Among the limited offerings, he explained that two of his favourites were the New Testament and Adam Smith's Wealth of Nations. Commenting on the latter, he had nothing to say about economic mathematics and principles; rather, he was extremely impressed by the INVISIBLE HAND, said in tones suggesting he had had some sort of Damascus moment while reading it.

The significance of this escaped me until a few months back, when another local politician from the same party was brought into another green/left-liberal workshop on future sustainable food supplies, in the name of balanced discussion. This politician is quite well known as also being a member of the local religious right.

Though he didn't contribute constructively, I was fascinated as he elaborated further on this theme of Deus Ex (Economic) Machina, more or less as follows:

– Economic LAWS are as fixed and as immutable as Newtonian gravity

– In fact their discovery and elaboration by (Nobel memorial Swedish central bank prize-winning) (neoclassical) economists demonstrates this is so.

– The solution to everything is the market and you just need to let it be.

– And all will be right in this best of all possible worlds (had he heard of Candide?) because of the Pareto optimum.

– And this is true because it's all scientific.

– (implied) None of you lefties understand these deep mystical truths because you don't understand economics; all your concerns are as nothing, and you don't need to be worried because all will be well.

Fascinatingly, not one of the other 7 panellists picked up on a single one of his points or took him to task, which disappointingly suggested that, regarding economics, they were indeed pretty ignorant and uninterested.

Anyway – has anyone got further thoughts on the god = physics = economics equation and what appears to be an organised effort to develop a weird religious/economics scientism on us?

I certainly found your ability to understand what I am talking about refreshing! On Gödel's incompleteness theorem, I think it is about self-consistency, whereas Shannon's definition of information is about languages pointing to meaning, and non-symbolic objects have "iconic" differences referring to their own meaning, e.g. cars having wheels indicating their mobility. I say that because Gödel is thinking of language as a set of symbols referring to each other, whereas Shannon is thinking of it as an element in a communication process, which iconically must have a beginning, a middle and an end, and insofar as it is continuous, must be "circular", still requiring three points for its definition (with a mirror-image triangle in the other half of the circle). Thus the energy of the Big Bang must become circular to become localised in material particles ("spray"); but that still leaves a good bit being communicated as undifferentiable energy. [Coherent light at frequency zero?]

The reality to which language refers thus consists of 'things' which amount to closed processes; processes which can be open but on different timescales may be temporarily closed; 'thing' references to things; and either static (written) or dynamic references to processes (e.g. audio differentiated by reference to frequency, i.e. timescales). But in this, timescales are neither one thing nor the other, and at the Big Bang there was nothing to differentiate one direction from another; hence the need to coordinate them by reference to the logical construct of time, allowing differentiation of the timescale of waves formed at the one differentiating feature of the Big Bang: the expanding energy reaching the limit of the space it has created. Thus (in statistical terminology) the two degrees of freedom of the objects of quaternions require the addition of another term: logical constructs (time and coordinates), here expressed dynamically by their inter-relations.
Put simply, the experimental proof of the pudding is in the move from a Gödelesque programming language, Algol 60, to the two-level Algol 68 (http://www.fh-jena.de/~kleine/history/languages/Algol68R-UserGuide.pdf), in which references to procedures are made statically explicit and transformed into processes by computer switching-circuit logic. It seems to me Gödel's problem can also be resolved by transforming it into a problem of choice of interpretation, which at root boils down to whether the Big Bang is a reference to itself or a reference to a process of evolution programmed by a God.

I’ve got some urgent work to do today so I’d better leave you to digest that. All the best.

Continuing crabel2013 @ #17

On orthogonality: yes, it represents logical independence, but until it had produced waves producing "spray" on encountering a "beach", the universe didn't have anything to be independent. Even waves of continuous energy are not independent but systematically related, although Poynting's vector and the quaternion group of stable electron, proton, neutron and atomic particles correspond to upward, sideways, forward and sideways directions of collapse of the wave front into "spray", and atoms to the recombination of spray into particles with internal structure: a pattern which repeats itself at later and later stages of evolution, much as an algorithmic (Arabic) number reuses the same numerals at higher and higher levels of significance.

Shannon's view of language similarly uses more and more words to narrow down the location at which one can perceive the meaning intended to be communicated. Hence the point of what I was trying to say about loss of information: the loss of information about one [type of] thing is not nearly so significant as the loss of the most significant digit, or of the bit of information indicating which volume the page index is referring to. Shannon, however, used excess (redundant) information capacity to combine this indexing information with error-indication (parity) bits, error-correction codes and the synchronised parallel processing of message and corrective negative feedback as "words".

At your reference to accuracy: "an engineer can make for a penny what any fool can make for a pound". It is wonderful to be able to dock with orbiting spacecraft, but for everyday life all that needs to be known is that the moon will stay in its orbit, because we have advance notice of any possibility (like a huge meteorite collision) that it won't, and a strategy for dealing with it before it does. That can be informally and very clearly stated in geometric diagrams without needing to go into algebraic equations and arithmetic.

At your reference to Bayes: the crucial bit of information missing in the Humean interpretation is the orthogonality of the second source to the first, i.e. two independent ways of estimating a probability (e.g. the symmetries of a die and sampling the results of throwing it) have been reduced (and, it seems, not just by monetary "economists") to, effectively, a formula for combining independent samples of the same population.
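The die example can be made concrete with a conjugate-prior sketch (the pseudo-counts and throw counts below are invented for illustration). The point is that the symmetry argument and the throwing experiment enter as two independent sources of information, not as two samples of the same population:

```python
# Source 1: the die's physical symmetry, encoded as a Beta prior on
# p = P(six) concentrated near 1/6 (these pseudo-counts are an
# illustrative choice, not taken from any measurement).
a, b = 10.0, 50.0                      # prior mean a/(a+b) = 1/6

# Source 2: an independent experiment, 60 throws yielding 14 sixes.
sixes, others = 14, 46

# Bayes' rule with a conjugate Beta prior: posterior is Beta(a+sixes, b+others).
post_a, post_b = a + sixes, b + others
posterior_mean = post_a / (post_a + post_b)

prior_mean = a / (a + b)
sample_freq = sixes / (sixes + others)

# The posterior blends the two orthogonal sources, landing between the
# symmetry-based estimate and the raw sample frequency.
print(prior_mean, sample_freq, posterior_mean)
```

Treating the symmetry argument as just another batch of throws would collapse the two sources into one; the conjugate form keeps them distinct but combinable.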

All this isn't spelled out in books but came from reflection on my work, the tools I was using and the history of their development, to a surprising extent illuminating and being illuminated by the Biblical stories and by the writer G. K. Chesterton's prophetic insights into personality differences, brain architecture and the indexical nature of language, these coming together in his "small is beautiful" localized economics. The nearest I have come to finding anyone else recombining what he has learnt in this fashion has been Arthur M. Young, a very successful helicopter engineer, in his "The Geometry of Meaning", 1976, so more or less contemporary with me. This is commented on by John Saloma in Joan L Schliecher, ed: "The Theory of Process 1.

In response to Newtownian at #18: the "organized effort to develop a weird religious/economic scientism on us" goes back to the Protestant "Reformation", the Humean "Enlightenment" and the "Whig version of history"; so, not surprisingly, John Ruskin's trying to publish "Unto This Last" and Chesterton's becoming a Catholic rapidly had them rebranded from Celebrities to Nut Cases. Chesterton's economic thought (now ridiculed as "three acres and a cow", which was his own joke about it) survives not only via the religiously discreet Schumacher, but with his "local democratic government" misrepresented by Hayek and Thatcher's mob as support for "minimal government".

Correction to complete references in last sentence of penultimate para, mislaid during editing:

“This is commented on by John Saloma in Joan L Schliecher, ed: “The Theory of Process 1. (Prelude: Search of a Paradigm)” and 2 (Major Themes in ‘The Reflexive Universe’), 1991, all published by Robert Briggs Associates, in California”.

With apologies.

How about the second invisible hand of Scitovsky? Uncertainty is the bucket of cold water much needed in this oh-so-sophisticated discussion.

Sorry, elaboration needed. Uncertainty includes unknowns in societal and environmental externalities that are incomparably more important than whatever happens within the market.

Sept. 02, 2013,

See [1] Paul Davidson on “MATHEMATICAL MODELLING IN ECONOMICS”, Sept. 02, 2013

And [2] Norman L. Roth, August 24, 2013, #18, above, especially lines 17 & 18 from the top.

Bravo, Paul Davidson. Although I strongly suspect that most rational commentary to date, such as yours on the role of mathematical exposition in economics [see the second-last paragraph of #18 above, from the great Norbert Wiener half a century ago], will in many cases "go in one ear and out the other" with the speed… of sound. Interested parties should also look into Clive W. Granger's valiant, but by no means inconclusive, work on "co-integration" and the limits of predictability through time-series analysis in economics. Especially his clear understanding that authentic variables of causation [especially monetary manipulations] do not necessarily lead to the holy grail of 'diktat' control over a modern complex economy. Nor can the bloody-minded longing for servo-mechanism-engendered stability, which still persists in so many seemingly unconnected quarters, lead anywhere except into the totalitarian abyss, where we have been so often in the recent past. The interested reader should also read pages 159 and 160 of the 195-page edition of TELOS & TECHNOS: 'Say's Law and the Monetarist Conceit'. And please read for yourselves what the great Keynes and Alfred Marshall said so clearly on this subject. Norman L. Roth, Toronto, Canada.

Thank you for your patience. Please GOOGLE [1] Technos, Norman Roth

[2] Origins of Markets, Norman Roth, [3] Telos & Technos, Roth

I downloaded the Yakovenko paper and gave it a quick glance. I was unable to find the Rosser review that you mentioned.

—-it's j b rosser and yakovenko on arxiv.org. it's not that different, though it has a few tweaks which may possibly address the issue i have with the paper and approach (a point that was also made by brian hayes in american scientist).

i don't really want to go through the details, except i'll say the ideal gas model assumes particles are identical, live forever, and react randomly (i.e. essentially have perfect information—ergodicity). one can fix some of these (e.g. by including savings, as some indian authors do). however, if instead of an ideal gas one looks at an economy as something more like adam smith or say or marx did—people who allocate time and energy to various things and exchange for various reasons with bounded rationality—then it's possible the statistical mechanics analogy is 'not even wrong', even if it gets some gross features like the income distribution correct. there are very specific assumptions in the 'entropy postulates' of boltzmann and gibbs, which is also why people later developed bose-einstein and fermi-dirac statistics. if you throw in nonequilibrium, finite lifespan, and different statistics, you end up with something possibly more correct but fairly intractable.

———————–

Wayne Saslow had another paper? What was the dispute? Saslow's 1999 paper made sense to me, so if there was a dispute about it I am not aware of where it would originate.

—actually the paper you mentioned is the one i have, so there's just the one. The problem with saslow is that his approach is fairly 'ad hoc'. I forget the details, but if i recall he identified utility with energy. Others identify utility with entropy. And then you can look at theoretical physics to see what is thought about entropy and energy, going back to the 1800's, with big contributions from louis de broglie of quantum theory up to verlinde today (entropic gravity).

These arbitrary choices, while ok as a start, are like me arbitrarily deciding that a complete understanding of Moral Science or Sentiments starts with My Book, and that any alternatives like the Bible, Koran, Tao, Confucius, Hinduism, Buddhism, Hitler, Marx, Rush Limbaugh, etc. are wrong and should be ignored.

————————-

Also, I did find a paper by two economists (Smith and Foley) from 2005 ("Classical thermodynamics and economic general equilibrium theory") where they used Saslow's paper but failed to apply entropy properly and thus had some difficulty in the paper.

Jaynes had a 1991 paper ("The second law as physical fact and as human inference") that was very informative; his work on developing MaxEnt was incredible.

————Eric Smith is a smart guy. Foley had a different paper in the 90's applying stat mech to the economy—it's better (more in the tradition of steven durlauf or eric follmer).

however, foley is very much in the standard obfuscationist tradition—rigor mortis; plus he doesn't really get any useful or even interpretable results (he just gets a publication). my view is that his earlier paper, which was around 25 pages, could have been written in 3-5 pages. the smith and foley paper i did not like at all; it's obfuscationism. but then i really am not too (if at all) 'down' with either much of economics (besides basic ideas like how individuals allocate time and make choices for production and distribution) or thermodynamics (which as far as i am concerned should be mostly replaced with statistical mechanics).

ola

Thank you again!

@ishi

Thank you again for the responses and feedback. I now have the papers you mentioned.

I need to address some of your points informally.

1. Utility is not entropy. Saslow is right that it is equivalent to internal energy. I used game theory to define the metric space of utility. When I aggregated this metric space, utility adopted the same functional form as internal energy. However, unlike other forms that use the Hamiltonian (the modern Keynesians), another term came out of the aggregation. This term replaced the part of the Hamiltonian that described the action of the individuals; it describes the aggregation of individual action in a very praxeological sense. Entropy is something altogether different: fundamentally, entropy is simply the expected index of probability, as Gibbs describes it. In information theory, entropy is called the expected uncertainty.

2. Ergodicity assumes a state of minimal knowledge of the system. Ergodic theory, when applied, is fundamentally a time-invariant form of the principle of maximum entropy: it assumes the least amount of information. Not assuming this requires the inclusion of additional information, which, as you correctly identify, significantly complicates the mathematics.

3. The ideal gas law is a power-law function. It is as valuable as it is because it is heavy-tailed and assumes very little about the actual structure. A Cobb-Douglas production function is so useful for the same reason, which is why a Cobb-Douglas, when applied in a thermodynamic setting, results in the ideal gas laws. In physics, we derive the ideal gas law by assuming that the particles do not interact with each other; thus the distribution we describe is one in which the particle is allowed to explore the entire phase space permitted by the extensive parameters. This would be like a person exploring every possibility of wealth and location allowed by policy and endogenous resource constraints.

4. While this is not possible for particles or people, the point of indistinguishability is that each individual has no identifiable endogenous constraints that would prohibit them from being rich or poor. In this regard women and men would achieve equal amounts, and race and religion would not be defining characteristics. This is more in line with Locke's ideas of natural rights. Now, society does discriminate. However, this discrimination does not increase entropy; it decreases it, by prohibiting actions of individuals. When we assume indistinguishability, we assume classically liberal ideas about individuals. I find it a touch ironic that classical liberalism represents the greatest entropy that a society can achieve, and that any other social ordering reduces the entropy of the system, limiting its wealth and action.

5. Regarding the assumption of perfect information: that is never made. What is assumed is that individuals have knowledge of the overall social and exogenous constraints, and are restricted in their action in that regard. Because we have a distribution of individuals with different wealth and resource allocations, we can approximate this distribution as the sum of the possibilities that any individual can achieve given the constraints.
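Point 4 above can be illustrated with a toy calculation (the wealth states and probabilities are invented for illustration): barring individuals from some states strictly lowers the entropy of the attainable distribution, which is exactly the sense in which discrimination decreases entropy.

```python
import math

def entropy(p):
    """Shannon entropy, in bits, of a probability vector."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# Toy setting: five attainable wealth states. With no endogenous
# constraints (indistinguishability), the maximum-entropy assignment
# spreads probability uniformly over all five states.
unconstrained = [0.2] * 5

# Discrimination that bars two of the states forces the probability onto
# the remaining three (again uniformly, for the maximum-entropy case).
constrained = [1 / 3, 1 / 3, 1 / 3, 0.0, 0.0]

print(entropy(unconstrained), entropy(constrained))
```

The unconstrained society attains log2(5) ≈ 2.32 bits, the discriminating one only log2(3) ≈ 1.58: prohibiting actions can only shrink the accessible "phase space".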

BTW, there is no micro or macro in physics; we get to the big by adding up the small. Different properties and laws fall out as a consequence of the aggregation that are not apparent in the small. Both are useful. I understand economics in the same manner.

comments—-

i looked again at saslow and smith/foley. smith/foley actually have an equation similar to saslow's (who also says on the 1st page that energy can be taken either as utility or as (negative) wealth; he chooses utility). foley and smith have a powerpoint online (on a dartmouth site) which discusses views on thermodynamic equivalences over the years—it's a confused mess going back to walras, tinbergen (who studied under gibbs), samuelson, etc.

i don't find either saslow or foley/smith particularly useful or insightful. smith and foley are fairly impenetrable—30 pages of prose, an 'example' that is about as convoluted as possible (as if i were trying to explain lagrangian mechanics and used the lagrangian of the standard model as my example, or the principle of least action and used general relativity), and they use 100's of symbols whose definitions one has to wade through the prose to find. they have other papers, and in some do find boltzmann-gibbs solutions of various sorts, with temperature as price (whereas in yakovenko temperature is average income and in saslow it's the average level of development). possibly the foley/smith papers all agree with each other (or possibly they don't) and all these authors agree; i doubt i have time to wade through the morass to see. (smith has excellent titles for his papers at the santa fe institute, but my glance at them suggests many are stylistically similar to his one with foley, so it's hard to know if there is any 'there' there.)


In fact, as i think about economics, rather than generate one’s own ‘all new’ dialect or theory it would be better to synthesize what is out there, but people want their own religions in which they can be preachers who can get high salaries, worship, tenure, etc.

yakovenko's model is intuitive—N people, and you throw money (energy) at them, and get a boltzmann distribution—assuming the individuals are identical but distinguishable, and that money 'quanta' (dollars) are not. one defines that maximum entropy as 'economic equilibrium'. things like cobb-douglas utility maximization for all individuals (from which one can define a social welfare function as an equilibrium) don't really enter, though they mention it. (with interactions like that, one has something of a constrained optimization, like an ising model or spin glass, so you get multiple optima, and the system will appear nonergodic, though it can still be written in boltzmann-gibbs form).
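the model sketched above is easy to simulate; here is a minimal sketch (the parameters are arbitrary, and the random-split exchange rule is one of several used in the kinetic-exchange literature, not necessarily yakovenko's exact rule):

```python
import random

random.seed(0)
N, exchanges = 1000, 100_000
money = [100.0] * N                 # identical agents, all starting equal

# Repeatedly pick two distinct agents, pool their money, and split the pot
# at a uniformly random fraction. Total money is conserved throughout.
for _ in range(exchanges):
    i, j = random.sample(range(N), 2)
    pot = money[i] + money[j]
    money[i] = random.random() * pot
    money[j] = pot - money[i]

# At maximum entropy ('economic equilibrium') the stationary distribution
# is approximately exponential (boltzmann), so although everyone started
# equal, roughly 1 - 1/e, about 63%, of agents end up below the mean.
mean = sum(money) / N
below_mean = sum(1 for m in money if m < mean) / N
print(mean, below_mean)
```

note how inequality emerges from identical agents and a symmetric rule: it is purely an entropy effect, which is the point about maximum entropy implying the exponential distribution rather than an 'all equal' one.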

To me, maximum entropy is 'perfect knowledge' as in the walras/arrow/hahn/lucas etc. models of general equilibrium economics—it's as if individuals knew every possible state of the system when choosing their income/job, etc. the main reason yakovenko-type models fail, to me, is that in such 'ideal gas' types all individuals are the same over time—so at any instant the system will be highly unequal, but over time every individual will go through the entire income distribution. maybe that is true—'in the long run we'll all be dead'.

game theory is a better—more intuitive—way of thinking, to me, and it's quite compatible with statistical physics it seems to me. but it's highly complex—e.g. NP-complete.

also, while indistinguishability might seem like lockean 'ideal freedom' (no discrimination, etc.), as noted, if modeled as maximum entropy in statistical mechanics this leads to inequality (boltzmann's exponential or a power law). i can think of income distributions which might be more 'ideal freedom', such as the uniform one—all equal—which of course is of very low probability in stat mech. but such a distribution is conceivable either as the absolute-zero-temperature solution of maximum entropy, or via a more mechanical model of distribution rather than a probabilistic one, i.e. perfect force balances at mechanical equilibrium. (it's appearing that, as l de broglie suggested in the 40's, these 2 seemingly incompatible definitions are actually equivalent (duals), as possibly electrons and black holes are—i.e. the micro is the macro and the reverse).