from Lars Syll
The kind of fundamental assumption about the character of material laws, on which scientists appear commonly to act, seems to me to be much less simple than the bare principle of uniformity. They appear to assume something much more like what mathematicians call the principle of the superposition of small effects, or, as I prefer to call it, in this connection, the atomic character of natural law. The system of the material universe must consist, if this kind of assumption is warranted, of bodies which we may term (without any implication as to their size being conveyed thereby) legal atoms, such that each of them exercises its own separate, independent, and invariable effect, a change of the total state being compounded of a number of separate changes each of which is solely due to a separate portion of the preceding state. We do not have an invariable relation between particular bodies, but nevertheless each has on the others its own separate and invariable effect, which does not change with changing circumstances, although, of course, the total effect may be changed to almost any extent if all the other accompanying causes are different. Each atom can, according to this theory, be treated as a separate cause and does not enter into different organic combinations in each of which it is regulated by different laws …
The scientist wishes, in fact, to assume that the occurrence of a phenomenon which has appeared as part of a more complex phenomenon, may be some reason for expecting it to be associated on another occasion with part of the same complex. Yet if different wholes were subject to laws qua wholes and not simply on account of and in proportion to the differences of their parts, knowledge of a part could not lead, it would seem, even to presumptive or probable knowledge as to its association with other parts. Given, on the other hand, a number of legally atomic units and the laws connecting them, it would be possible to deduce their effects pro tanto without an exhaustive knowledge of all the coexisting circumstances.
Keynes’ incisive critique is of course of interest in general for all sciences, but I think it is also of special interest in economics as a background to much of Keynes’ doubts about inferential statistics and econometrics.
from Asad Zaman
Modern economic theory is founded on the principle that human beings “maximize utility”; that is, they choose the best action from among a collection of choices. This axiom is considered self-evident: why would anyone make an inferior choice, when a better option is available? However, the mathematical formulation of this axiom is far from realistic. After all, it is self-evident that human behavior cannot be described by mathematical laws. Critics have invented the term “homo economicus” to describe behavior governed by economic laws, which differs drastically from normal human behavior. We can describe homo economicus as cold, calculating, and callous. We explain each of these terms separately.
The theory of consumer behavior taught in business schools differs drastically from the same theory taught in economics departments. The homo economicus of economists is cold – not subject to any emotional influences in his consumption decisions. In complete contrast, a fundamental axiom of consumer theory in the business school is that effective marketing appeals to emotions instead of reason. The proven effectiveness of business school methods in getting consumers to purchase a wide range of completely useless goods shows the superiority of their models of human behavior. Read more…
from Lars Syll
In 1938 Paul Samuelson offered a replacement for the then accepted theory of utility. The cardinal utility theory had been discarded a couple of years earlier, but according to Samuelson, the ordinalist revision of utility theory was not drastic enough. One ought to analyze the consumer’s behaviour without having recourse to the concept of utility at all, since this did not correspond to directly observable phenomena.
The new theory’s main feature was a consistency postulate which said ‘if an individual selects batch one over batch two, he does not at the same time select two over one.’ From this ‘perfectly clear’ postulate and the assumptions of given demand functions and that all income is spent, Samuelson could derive all the main results of ordinal utility theory (single-valuedness and homogeneity of degree zero of demand functions, and negative semi-definiteness of the substitution matrix).
In 1950 Hendrik Houthakker made an amendment to the theory assuring integrability, and by that the theory had according to Samuelson been ‘brought to a close.’ According to Houthakker, the aim of the revealed preference approach was ‘to formulate equivalent systems of axioms on preferences and on demand functions.’
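Samuelson's 'perfectly clear' consistency postulate is what is now called the Weak Axiom of Revealed Preference: if bundle one is chosen when bundle two is affordable, bundle two must never be chosen in a situation where bundle one is affordable. A minimal sketch of that check, with illustrative prices and bundles (none of the data is from the post):

```python
# Sketch of Samuelson's consistency postulate (the Weak Axiom of Revealed
# Preference): if x is chosen when y is affordable, y must never be chosen
# when x is affordable. Prices, bundles, and incomes are illustrative.

def cost(bundle, prices):
    """Expenditure needed to buy a bundle at given prices."""
    return sum(p * q for p, q in zip(prices, bundle))

def violates_warp(observations):
    """observations: list of (prices, chosen_bundle).
    Income in each situation is taken to be the cost of the chosen bundle
    (all income is spent, as Samuelson assumes)."""
    for pi, xi in observations:
        for pj, xj in observations:
            if xi == xj:
                continue
            # xi revealed preferred to xj, AND xj revealed preferred to xi
            if cost(xj, pi) <= cost(xi, pi) and cost(xi, pj) <= cost(xj, pj):
                return True
    return False

# Consistent choices: at prices (1, 2) pick (4, 1); at prices (2, 1) pick (1, 4).
consistent = [((1, 2), (4, 1)), ((2, 1), (1, 4))]
# Inconsistent choices: each bundle is affordable when the other is chosen.
inconsistent = [((1, 1), (3, 1)), ((1, 1), (1, 3))]
```

The point of the postulate is exactly this pairwise check: no appeal to a utility function is needed, only observed prices and choices.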
Arna Vardardottir on Voxeu: “One of the many impacts of the Global Crisis was on stress levels, and these can be a risk factor for adverse birth outcomes. This column shows that exposure to the Crisis resulted in a significant reduction in the birth weight of babies in Iceland, comparable in size to the effect of smoking during pregnancy. The full costs of poor health at birth as a result of the Crisis will not materialise until the children exposed in utero become adults”
Mathieu Couttenier, Veronica Preotu and Mathias Thoenig on Voxeu: “The refugee crisis that erupted in 2015 has raised concerns about potential violence and criminality of the migrants. This column investigates whether past exposure to conflict makes asylum seekers in Switzerland more violent. The findings show that cohorts exposed to civil conflicts/mass killings during childhood are, on average, 40% more prone to violent crimes than their co-nationals born after the conflict. Certain policies can mitigate this result. In particular, offering labour market access to asylum seekers eliminates all the effect” Read more…
from Edward Fullbrook
Yesterday evening Merijn Knibbe put up this comment on Lars Syll’s post Utility maximization — explaining everything and nothing.
One of the features of the utility model is that it 'explains' the downward sloping demand curve, a cornerstone of economics, which makes neoclassical demand theory seem a pretty coherent building with sound foundations.
It does not explain the downward sloping demand curve; it is only consistent with this curve. And in 1962 Gary Becker showed, in an article called 'Irrational behavior and economic theory', that many models can 'explain' a downward sloping demand curve when money is limited, including the throw of a die. Ockham's razor requires us to use the simplest model… http://mcadams.posc.mu.edu/econ/Becker,%2520Irrational%2520Behavior.pdf
Becker himself seems not to have grasped the implications of his article, which shredded neoclassical demand theory. Accepting that demand curves very often slope downwards does not mean that one has to accept utility theory.
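Becker's point can be sketched in a few lines: agents who pick a purely random point on their budget line, with no preferences and no maximization at all, still generate a downward-sloping average demand curve, simply because a higher price shrinks the budget set. The numbers below are illustrative, not from Becker's paper:

```python
import random

def random_demand(price, income, n_agents=100_000, seed=1):
    """Average demand for a good when each agent spends a uniformly random
    share of income on it -- no utility maximization involved.
    Mean demand is roughly income / (2 * price), which falls as price rises."""
    rng = random.Random(seed)
    total = sum(rng.random() * income / price for _ in range(n_agents))
    return total / n_agents

low = random_demand(price=1.0, income=10.0)   # cheap good
high = random_demand(price=2.0, income=10.0)  # same good after a price rise
# low > high: 'irrational' random choice still yields downward-sloping demand.
```

So observing downward-sloping demand tells us nothing about whether consumers maximize utility, which is Becker's anomaly.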
My paper (http://www.paecon.net/Fullbrook/IntersubjectiveTheoryofValue.pdf) “An Intersubjective Theory of Value”, in Intersubjectivity in Economics: Agents and Structures, editor Edward Fullbrook. London and New York: Routledge, 2002, pp. 273-299, includes a subsection on the theoretical anomaly noted by Gary Becker in his 1962 paper. But that anomaly had been noted with somewhat greater depth and sophistication by Irving Fisher in 1920. Neither economist, however, was capable of understanding the profound significance of the anomaly, because to do so requires a bit of abstract algebra, which, unfortunately, is not part of the economist’s standard tool kit. Below is the relevant section from my 2002 paper. The first part of that paper includes a gentle introduction to the mathematical ideas missing from Fisher’s book and Becker’s paper. Read more…
from David Ruccio
We all know that economic inequality has reached grotesque, even obscene, levels around the world. And the gap between a tiny group at the top and everyone else continues to grow.
But is inequality a human rights concern?
As Ignacio Saiz and Gaby Oré Aguilar [ht: ms] explain, the ongoing debates about inequality
have rarely made reference to human rights. In turn, the human rights community has paid very little attention to economic inequality. While inequality on grounds such as gender, race and disability have long been core human rights concerns, gross inequalities in economic status remain largely unchallenged by human rights law and advocacy.
The question, then, is whether it is possible or even desirable to make inequality a central concern of the global human-rights movement. Read more…
from Lars Syll
Despite the rise of behavioral economics, many economists still believe that utility maximization is a good explanation of human behavior. Although evidence from experimental economics and elsewhere has rolled back the assumption that human agents are entirely self-interested, and shown that altruism and cooperation are important, a prominent response has been to modify individual preference functions so that they are “other-regarding”. But even with these modified preference functions, individuals are still maximizing their own utility.
Defenders of utility maximization rightly reject critical claims that specific behavioral outcomes undermine this assumption. They do not. But this is a sign of weakness rather than strength. The problem is that utility maximization is unfalsifiable as an explanation of behavior. As I show more fully in my 2013 book entitled From Pleasure Machines to Moral Communities, utility maximization can fit any real-world evidence, including behavior that appears to suggest preference inconsistency.
But note that utility maximization is not a tautology. Tautologies are true by assumption or definition. Utility maximization is not a tautology because it is potentially false. But empirically it is unfalsifiable.
Where does that leave us? Utility maximization can be useful as a heuristic modelling device. But strictly it does not explain any behavior. It does not identify specific causes. It cannot explain any particular behavior because it is consistent with any observable behavior. Its apparent universal power signals weakness, not strength.
Interesting post from one of yours truly’s favourite economists. Read more…
from Steve Keen
For decades, some of the most important data about market economies was simply unavailable: the level of private debt. You could get government debt data easily, but (with the outstanding exception of the USA—and also Australia) it was hard to come by.
That has been remedied by the Bank for International Settlements, which now publishes a quarterly series on debt—government & private—for over 40 countries. This data lets me identify the seven countries that, on my analysis, are most likely to suffer a debt crisis in the next 1-3 years. They are, in order of likely severity: China, Australia, Sweden, Hong Kong (though it might deserve first billing), Korea, Canada, and Norway.
I’ve detailed the logic behind my argument too many times to count, and I won’t repeat it here (if you want to check it out, try this Forbes post on Krugman, this one on money, this one on the Fed, or this one on our dysfunctional monetary system). The bottom line is that private sector expenditure in an economy can be measured as the sum of GDP plus the change in credit, and crises occur when (a) the ratio of private debt to GDP is large, and (b) it is growing quickly relative to GDP. When the growth of credit falls—as it eventually must, as growing debt servicing exhausts the funds available to finance it, new borrowers baulk at entry costs to house purchases, and numerous euphoric and Ponzi-based debt-financed schemes fail—the change in credit can go negative, subtracting from demand rather than adding to it. Read more…
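The accounting point above can be illustrated with a toy series (the numbers are hypothetical, not Keen's or the BIS's): even while GDP keeps growing, a slowdown in the *change* of credit drags total demand down.

```python
# Toy illustration of the identity: demand = GDP + change in credit.
# All figures are hypothetical, chosen only to show the mechanism.

gdp = [1000, 1020, 1040, 1060, 1080]   # nominal GDP per year, still growing
credit = [500, 560, 600, 610, 590]     # outstanding private credit

# Change in credit each year (new borrowing net of repayment).
change_in_credit = [credit[t] - credit[t - 1] for t in range(1, len(credit))]

# Private sector expenditure: GDP plus the change in credit.
demand = [gdp[t] + change_in_credit[t - 1] for t in range(1, len(gdp))]

# change_in_credit: [60, 40, 10, -20]
# demand:           [1080, 1080, 1070, 1060]
# Demand falls once credit growth slows, and falls further when credit contracts,
# even though GDP itself never stops growing.
```

This is why a high and fast-growing private-debt-to-GDP ratio is, on Keen's argument, the crisis signal: demand becomes hostage to credit *continuing* to grow.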
from Peter Radford
Economists, especially mainstream economists, often like to ignore the real world consequences of their theories. Instead they prefer to hide away pretending that their conversations and ideas leave no imprint on society, and that their simple little models are just representations designed to cut through the tangle of reality to get at some core truth. Only in the grand world of macroeconomics is this not true. There, economists love to strut about as if they hold the keys to universal insights untroubled by the somewhat ambiguous results their ideas appear to inflict on the rest of us.
The fact that there are economists on all three sides of any two-sided argument ought to be sufficient to let us know that their insights are a little vague, and highly dependent on each individual economist’s worldview. Economics, it seems sometimes, is little more than highly formalized opinion. Read more…
The graph below is from an open-access paper just published in Bulletin of the Atomic Scientists.
from Lars Syll
The desire in the profession to make universalistic claims following certain standard procedures of statistical inference is simply too strong to embrace procedures which explicitly rely on the use of vernacular knowledge for model closure in a contingent manner. More broadly, such a desire has played a vital role in the decisive victory of mathematical formalization over conventional verbally based economic discourses as the principal medium of rhetoric, owing to its internal consistency, reducibility, generality, and apparent objectivity. It does not matter that [as Einstein wrote] ‘as far as the laws of mathematics refer to reality, they are not certain.’ What matters is that these laws are ‘certain’ when ‘they do not refer to reality.’ Most of what is evaluated as core research in the academic domain has little direct bearing on concrete social events in the real world anyway.
from Lars Syll
Trying to delineate the difference between ‘New Keynesianism’ and ‘Post Keynesianism’ — during an interview a couple of weeks ago — yours truly was confronted by the odd and confused view that Axel Leijonhufvud was a ‘New Keynesian.’ I wasn’t totally surprised — I had run into that misapprehension before — but still, it’s strange how wrong people sometimes get things.
The last time I met Axel, we were both invited keynote speakers at the conference “Keynes 125 Years – What Have We Learned?” in Copenhagen. Axel’s speech was later published as Keynes and the crisis and contains the following thought-provoking passages: Read more…
from Dean Baker
A standard fear raised in Washington policy debates is that the development of robots and other forms of technology will displace tens of millions of workers, leaving much of the workforce without jobs. This is a remarkable story, both because it is not supported by any evidence and because it runs in the opposite direction of virtually all the main concerns raised in debates over economic policy.
The story of the rising robots should mean that we are seeing a rapid increase in the amount of output per hour of work. The logic is that the robots are doing work that humans used to do, so we have more output for every hour of human labor.
In fact, we are seeing the exact opposite. The rate of productivity growth, which measures output per hour of work, has slowed sharply in recent years. In the last five years productivity growth in the United States has averaged just 0.4 percent annually, the slowest five-year stretch on record. This compares to a rate of close to 3.0 percent annually from 1995 to 2005, which was also the rate of productivity growth during the long post-World War II Golden Age, from 1947 to 1973.
from Asad Zaman and the WEA Pedagogy Blog
Based on my analysis of Polanyi’s methodology, I have come to the conclusion that a radical economics textbook for the twenty-first century can be based on a single CORE principle, which REVERSES conventional methodology. Conventional methodology, both heterodox and orthodox, considers economic theories to be a means to understanding economic events. INSTEAD, we should look at how theories emerged as people searched for explanations for emergent historical events. In particular, different theories would favor different group interests, and the dominance of one theory over others would be dictated by the political power of those groups. For example, Polanyi shows how theories are generated by historical processes — the emergence of the possibility of large-scale production led to the emergence of market-friendly theories.
This idea was lost from view because of the empiricist illusion that the facts by themselves are sufficient to determine theories. A great deal of creative energy goes into weaving a narrative around any given collection of facts, leaving a great deal of room for human agency, and for blending class interests into a theory. Instead of using theories to understand economic processes, I would like to use historical context to understand the formation of economic theories. For example: Read more…
from Lars Syll
The unsellability of DSGE models — private-sector firms do not pay lots of money to use DSGE models — is one strong argument against DSGE.
But it is not the most damning critique of it.
To me the most damning critiques that can be levelled against DSGE models are the following two:
(1) DSGE models are unable to explain involuntary unemployment
In the basic DSGE models the labour market is always cleared – responding to a changing interest rate, expected lifetime incomes, or real wages, the representative agent maximizes the utility function by varying her labour supply, money holding and consumption over time. Most importantly – if the real wage somehow deviates from its ‘equilibrium value,’ the representative agent adjusts her labour supply, so that when the real wage is higher than its ‘equilibrium value,’ labour supply is increased, and when the real wage is below its ‘equilibrium value,’ labour supply is decreased. Read more…
- Breast feeding is best, according to the World Health Organization
- Taking that (and therewith the need for long, paid maternity leave) as a given, baby milk powder sheds an interesting light on the present price-centered discussions about international trade: it’s not just about prices. And the present discussion does not take Schumpeterian entrepreneurs (and the regulatory state) seriously enough. In 2008 the lack of apt government regulation in China led to the poisoning of 300,000 babies who were bottle-fed with baby milk contaminated with, mainly, melamine. This was of course a boon for foreign high-end baby milk producers, which saw their exports to China soar. Recently, however, China has been increasing safety standards for (among other food products) dairy products. This drive includes rigid rules for imported baby milk.
- This caused the Dutch/Chinese/Taiwanese baby milk producer Ausnutria Hypocra to write down 11 million Euro of inventories of already produced baby food. It also persuaded them, however, to beef up investment in a new plant they were already building to 100 million, enabling this plant to produce baby milk powder (a very complicated product) that complies with the new regulations, based not only on cows’ milk but also on goat milk (which contains fewer allergens).
from Asad Zaman
How Robert Lucas ridiculed the unemployed and defined the concept of unemployment out of existence.
I’m investigating differences between statistical and model definitions of economic variables. While doing this I hit on the article ‘Understanding business cycles’ by Nobel prize winner Robert Lucas. This article aims to demolish chapter II of Keynes’ General Theory and especially the concept of involuntary unemployment. What a mess. Some points:
A) Lucas starts by embracing the investigations of the business cycle by Mitchell – but conveniently ‘forgets’ to mention that unemployment, a prime interest of Mitchell, was a core element of such investigations.
B) He states about chapter II of the general theory: Read more…