from Lars Syll
Neoclassical economics nowadays usually assumes that agents who have to make choices under conditions of uncertainty behave according to Bayesian rules (preferably the ones axiomatized by Ramsey (1931), de Finetti (1937) or Savage (1954)) – that is, they maximize expected utility with respect to some subjective probability measure that is continually updated according to Bayes’ theorem. If not, they are supposed to be irrational, and ultimately – via some “Dutch book” or “money pump” argument – susceptible to being ruined by some clever “bookie”.
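For readers who want the target of the critique spelled out, here is a minimal sketch of the Bayesian apparatus in question – standard textbook notation, not anything specific to the authors cited above:

```latex
% Subjective expected utility: choose the act a that maximizes expected
% utility under the agent's subjective prior \pi over states \theta
\max_{a \in A} \; \sum_{\theta \in \Theta} u(a,\theta)\, \pi(\theta)

% On observing evidence e, the prior is revised by Bayes' theorem:
\pi(\theta \mid e) \;=\; \frac{p(e \mid \theta)\, \pi(\theta)}{\sum_{\theta' \in \Theta} p(e \mid \theta')\, \pi(\theta')}
```

The “Dutch book” argument then says that an agent whose degrees of belief violate the probability axioms can be offered a set of bets, each individually acceptable to her, that together guarantee a loss.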
Bayesianism reduces questions of rationality to questions of internal consistency (coherence) of beliefs, but – even granted this questionable reductionism – do rational agents really have to be Bayesian? As I have been arguing elsewhere (e.g. here and here), there is no strong warrant for believing so.
In many of the situations that are relevant to economics, one could argue that there is simply not enough adequate and relevant information to ground beliefs of a probabilistic kind, and that in those situations it is not really possible, in any relevant way, to represent an individual’s beliefs in a single probability measure. Read more…
from Dean Baker
Federal Reserve Board Chair Janet Yellen made waves in her Congressional testimony last week when she argued that social media and biotech stocks were over-valued. She also said that the price of junk bonds was out of line with historic experience. By making these assertions in a highly visible public forum, Yellen was using the power of the Fed’s megaphone to stem the growth of incipient bubbles. This is an approach that some of us have advocated for close to twenty years.
Before examining the merits of this approach, it is worth noting the remarkable transformation in the Fed’s view on its role in containing bubbles. Just a decade ago, then Fed Chair Alan Greenspan told an adoring audience at the American Economic Association that the best thing the Fed could do with bubbles was to let them run their course and then pick up the pieces after they burst. He argued that the Fed’s approach to the stock bubble vindicated this route. Apparently it did not bother him, or most of the people in the audience, that the economy was at the time experiencing its longest period without net job growth since the Great Depression.
The Fed’s view on bubbles has evolved enormously. Most top Fed officials now recognize the need to take steps to prevent financial bubbles from growing to the point that their collapse would jeopardize the health of the economy. However, there are two very different routes proposed for containing bubbles. Read more…
from Lars Syll
Along with the Arrow-Debreu existence theorem and some results on regular economies, SMD theory fills in many of the gaps we might have in our understanding of general equilibrium theory …
It is also a deeply negative result. SMD theory means that assumptions guaranteeing good behavior at the microeconomic level do not carry over to the aggregate level or to qualitative features of the equilibrium. It has been difficult to make progress on the elaborations of general equilibrium theory that were put forth in Arrow and Hahn 1971 …
Given how sweeping the changes wrought by SMD theory seem to be, it is understandable that some very broad statements about the character of general equilibrium theory were made. Fifteen years after General Competitive Analysis, Arrow (1986) stated that the hypothesis of rationality had few implications at the aggregate level. Kirman (1989) held that general equilibrium theory could not generate falsifiable propositions, given that almost any set of data seemed consistent with the theory. These views are widely shared. Bliss (1993, 227) wrote that the “near emptiness of general equilibrium theory is a theorem of the theory.” Andreu Mas-Colell, Michael Whinston, and Jerry Green (1995) titled a section of their graduate microeconomics textbook “Anything Goes: The Sonnenschein-Mantel-Debreu Theorem.” There was a realization of a similar gap in the foundations of empirical economics. General equilibrium theory “poses some arduous challenges” as a “paradigm for organizing and synthesizing economic data” so that “a widely accepted empirical counterpart to general equilibrium theory remains to be developed” (Hansen and Heckman 1996). This seems to be the now-accepted view thirty years after the advent of SMD theory …
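For reference, the result behind all this can be stated roughly as follows – a paraphrase in standard notation, not the quoted author’s own formulation:

```latex
% Sonnenschein-Mantel-Debreu theorem, rough statement:
% let Z(p) be ANY continuous function of the price vector p satisfying only
p \cdot Z(p) = 0                                        % Walras's law
Z(\lambda p) = Z(p) \quad \text{for all } \lambda > 0   % homogeneity of degree zero
% Then, on any compact set of prices bounded away from the boundary of the
% price simplex, there exists an exchange economy with n well-behaved,
% utility-maximizing consumers whose aggregate excess demand coincides with Z.
```

In other words, individual rationality places essentially no restriction on the shape of aggregate excess demand, which is why uniqueness and stability of general equilibrium cannot be guaranteed.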
And so what? Why should we care about Sonnenschein-Mantel-Debreu? Read more…
from Thomas Palley
There is an old story about a policeman who sees a drunk looking for something under a streetlight and asks what he is looking for. The drunk replies he has lost his car keys and the policeman joins in the search. A few minutes later the policeman asks if he is sure he lost them here and the drunk replies “No, I lost them in the park.” The policeman then asks “So why are you looking here?” to which the drunk replies “Because this is where the light is.” That story has much relevance for the economics profession’s approach to the Phillips curve.
The question triggering the discussion is whether Phillips curve (PC) theory can account for inflation and the non-emergence of sustained deflation in the Great Recession. Four approaches are considered: (1) the original PC without inflation expectations; (2) the adaptive inflation expectations augmented PC; (3) the rational inflation expectations new classical vertical PC; and (4) the new Keynesian “sluggish price adjustment” PC that embeds a mix of lagged inflation and forward-looking rational inflation expectations. The conclusion seems to be that the original PC does best with regard to recent inflation experience but, of course, it fails with regard to past experience.
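In rough textbook form – notation mine, a sketch rather than Palley’s own equations – the four specifications are:

```latex
\pi_t = f(u_t), \quad f' < 0                                          % (1) original PC
\pi_t = \pi_{t-1} + f(u_t)                                            % (2) adaptive expectations PC
\pi_t = E_{t-1}\pi_t + \beta\,(u^n - u_t)                             % (3) new classical vertical PC
\pi_t = \gamma\,\pi_{t-1} + (1-\gamma)\,E_t\pi_{t+1} + \kappa\, x_t   % (4) new Keynesian hybrid PC
```

Here \pi_t is inflation, u_t unemployment, u^n the natural rate and x_t a measure of demand pressure. In (3) only inflation surprises move unemployment away from u^n, which is what makes the long-run curve vertical.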
There is another obvious explanation that has been overlooked by mainstream economists for nearly forty years because they have preferred to keep looking under the “lamppost” of their conventional constructions. That alternative explanation rests on a combination of downward nominal wage rigidity and incomplete incorporation of inflation expectations in a multi-sector economy. Read more…
from Lars Syll
Last year Dirk Ehnts had an interesting post up where he took Paul Krugman to task for still being married to the loanable funds theory.
Unfortunately this is not an exception among “New Keynesian” economists.
Neglecting anything resembling a real-world finance system, Greg Mankiw — in the 8th edition of his intermediate textbook Macroeconomics — has appended a new chapter to the other nineteen chapters where finance more or less is equated to the neoclassical thought-construction of a “market for loanable funds.”
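In its simplest textbook version the construction runs roughly as follows – a sketch of what is being criticized, not a transcription of Mankiw’s chapter:

```latex
% Output is fixed at its full-employment level, so national saving is given:
\bar{S} = \bar{Y} - C(\bar{Y} - T) - G
% Investment demand falls with the real interest rate:
I = I(r), \qquad I'(r) < 0
% The real interest rate r^* adjusts until the market for funds clears:
\bar{S} = I(r^*)
```

On this picture banks merely pass along a pre-existing pool of saving; nothing in it resembles endogenous credit creation or a financial system capable of generating a crisis – which is precisely the critics’ complaint.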
On the subject of financial crises he admits that
perhaps we should view speculative excess and its ramifications as an inherent feature of market economies … but preventing them entirely may be too much to ask given our current knowledge.
This is of course self-evident for all of us who understand that both ontologically and epistemologically founded uncertainty makes any such hopes totally unfounded. But it’s rather odd to read this in a book that bases its models on assumptions of rational expectations, representative actors and dynamic stochastic general equilibrium – assumptions that convey the view that markets – give or take a few rigidities and menu costs – are efficient! For one of the many neoclassical economists so proud of their (unreal, yes, but) consistent models, Mankiw is here flagrantly inconsistent! Read more…
from Lars Syll
Assumptions in scientific theories/models are often chosen for reasons of (mathematical) tractability – and are therefore necessarily simplifying – or for more or less self-evidently necessary reasons of theoretical consistency. But one should also remember that assumptions are selected for a specific purpose, and so the arguments put forward for having selected a specific set of assumptions (in economics shamelessly often totally non-existent) have to be judged against that background, to check whether they are warranted.
This, however, only shrinks the assumptions set minimally – it is still necessary to decide which assumptions are innocuous and which are harmful, and what constitutes interesting/important assumptions from an ontological and epistemological point of view (explanation, understanding, prediction). Especially so if you intend to refer your theories/models to a specific target system – preferably the real world. To do this, one should start by applying a Real World Filter in the form of a Smell Test: is the theory/model reasonable given what we know about the real world? If not, why should we care about it? And if not, we shouldn’t apply it (remember: time is limited, and economics is a science of scarcity & optimization …)
from Asad Zaman
Polanyi’s The Great Transformation is widely recognized as among the most deeply original and seminal analyses of the origins and effects of capitalism. In a previous post, I provided a brief summary of Polanyi’s main arguments. Polanyi does not explicitly discuss methodology, but his analysis is based on a methodology radically different from any currently in use in the social sciences. This methodology could provide the basis for an entirely new approach to the subject matter. In my paper entitled The Methodology of Polanyi’s Great Transformation, I have articulated central elements of this methodology by showing how Polanyi uses them in his book. I provide a brief summary of the main ideas of the paper here.
First, note that Polanyi operates at a meta-theoretical level. The work analyzes the emergence of theories as attempts to understand historical experience. This immediately leads to a historically context-sensitive analysis, as opposed to the a-historical methods currently dominant in economics. In an extremely interesting twist, Polanyi argues that the theories formulated by contemporaries to understand their own experience are often wrong. Nonetheless, these theories are used to understand and shape responses to historical circumstances. This mechanism provides substantial room for human agency in influencing history. The key elements of Polanyi’s methodology, extracted from how he utilizes them in his book, are listed as follows: Read more…
from June Sekera
A year ago last May, the Real World Economics Review blog published my post, “Why Aren’t We Talking About Public Goods?” In that article I argued that we need to revive and reframe the concept of public goods. A concept of public goods is immensely important because:
- The absence of a widely-held, constructive idea of public goods in public discourse denies citizens the ability to have an informed conversation, or to make informed decisions, about things that matter mightily to the quality of their lives and their communities.
- Its absence robs public policy makers, leaders and managers of the concept that is most central to the reason for their being.
- The current economics definition of public goods feeds and supports the marketization and privatization of government, and the consequent undermining of governments’ ability to operate.
Since last May I have met with economists and other social scientists across the US and in the UK, and have been in discussion with people from several other countries who responded to my post. I have also been conducting further research.
In this post I summarize the results of my discussions and findings to date and offer for consideration some criteria for a possible “instrumental” definition of public goods. Ultimately, an instrumental definition of public goods must be accompanied by a concordant theory of non-market production in the public economy. Both are needed to ground an improved theory and practice of governance.
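For reference, the existing definition at issue is the standard textbook one, usually traced to Samuelson (1954): a public good is non-rival and non-excludable, with the associated efficiency condition (standard notation, not Sekera’s):

```latex
% Samuelson condition for optimal provision of a public good G:
% the sum of individual marginal rates of substitution between the public
% good and a private good x equals the marginal rate of transformation.
\sum_{i=1}^{n} MRS^{i}_{G,x} \;=\; MRT_{G,x}
```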
1. The Existing Definition and Its Inadequacies Read more…
from Lars Syll
The other day yours truly wrote re Krugman‘s dangerous neglect of methodological reflection:
The financial crisis of 2007-08 and its aftermath definitely shows that something has gone terribly wrong with our macroeconomic models, since they obviously did not foresee the collapse or even make it conceivable … Modern mainstream macroeconomics obviously did not anticipate the enormity of the problems that unregulated ‘efficient’ financial markets created. Why? Because it builds on the myth of us knowing the ‘data-generating process’ … Mainstream macroeconomists … want to be able to use their hammer. They decide to pretend that the world looks like a nail and that uncertainty can be reduced to risk. So they construct their mathematical models on that assumption – and the ensuing results are financial crises and economic havoc.
Now Brad DeLong earlier today commented on my critique:
Suppose we decide that we are no longer going to:
Pretend that agents — or economists — know the data-generating process…
Recognize that people are not terribly committed to Bayesianism – that they do not model probabilities as if they have well-defined priors and all there is is risk…
What do we then do – what kind of economic arguments do we make – once we have made those decisions?
“What do we then do?” The despair heard in the question reminds me of the first time I met Phil Mirowski. It was twenty years ago, and he had been invited to give a speech on themes from his book More Heat than Light at my economics department in Lund, Sweden. All the neoclassical professors were there. Their theories were totally mangled and no one — absolutely no one — had anything to say even remotely reminiscent of a defense. Being at a nonplus, one of them, in total desperation, finally asked “But what shall we do then?” Read more…
from John Weeks
Against all expectations, an economics book became a best-seller this year. I illustrate this unlikely occurrence with a true story. One day in London I hailed a taxi near the Houses of Parliament (the workers of the underground system were on strike). I mentioned to the driver that I had taught economics at the University of London before retiring several years ago. The driver asked me, “Have you read this book by a Frenchman named Piketty?”
A London taxi driver discussing an economics book 578 pages long (text only) with countless graphics and even a bit of algebra qualifies the book as a “phenomenon” by the dictionary definition, “a fact or situation that is observed to exist or happen, especially one whose cause or explanation is in question”. Very much in question the cause is. I am in the process of writing a review of these 578 pages (plus the occasional excursion into a footnote). At this point I limit myself to speculating over why it has swept all before it, especially since it is certain to be a book that many people buy and almost no one reads.
We find many reviews of Capital in the Twenty-First Century (which I shorten to C21C), most from progressives, soft to hard left. The inequality deniers have yet to launch a frontal assault, though a recent blog entry for the Financial Times by Chris Giles is a shot from that direction (see Piketty’s reply). Prominent UK journalist Paul Mason succinctly dismisses the attempted hatchet job (here). Read more…
from Lars Syll
But I am unfamiliar with the methods involved and it may be that my impression that nothing emerges at the end which has not been introduced expressly or tacitly at the beginning is quite wrong … It seems to me essential in an article of this sort to put in the fullest and most explicit manner at the beginning the assumptions which are made and the methods by which the price indexes are derived; and then to state at the end what substantially novel conclusions have been arrived at …
I cannot persuade myself that this sort of treatment of economic theory has anything significant to contribute. I suspect it of being nothing better than a contraption proceeding from premises which are not stated with precision to conclusions which have no clear application … [This creates] a mass of symbolism which covers up all kinds of unstated special assumptions.
Letter from Keynes to Frisch, 28 November 1935
from Lars Syll
How far the motives which I have been attributing to the market are strictly rational, I leave it to others to judge. They are best regarded, I think, as an example of how sensitive – over-sensitive if you like – to the near future, about which we may think that we know a little, even the best-informed must be, because, in truth, we know almost nothing about the more remote future …
The ignorance of even the best-informed investor about the more remote future is much greater than his knowledge … But if this is true of the best-informed, the vast majority of those who are concerned with the buying and selling of securities know almost nothing whatever about what they are doing … This is one of the odd characteristics of the Capitalist System under which we live …
It may often profit the wisest to anticipate mob psychology rather than the real trend of events, and to ape unreason proleptically … (The object of speculators) is to re-sell to the mob after a few weeks or at most a few months. It is natural, therefore, that they should be influenced by the cost of borrowing, and still more by their expectations on the basis of past experience of the trend of mob psychology. Thus, so long as the crowd can be relied on to act in a certain way, even if it be misguided, it will be to the advantage of the better informed professional to act in the same way – a short period ahead.
from Lars Syll
Almost a hundred years after John Maynard Keynes wrote his seminal A Treatise on Probability (1921), it is still very difficult to find mainstream economists who seriously try to incorporate his far-reaching and incisive analysis of induction and evidential weight into their theories and models.
The standard view in economics – and the axiomatic probability theory underlying it – is to a large extent based on the rather simplistic idea that “more is better.” But as Keynes argues – “more of the same” is not what is important when making inductive inferences. It’s rather a question of “more but different.”
Variation, not replication, is at the core of induction. Finding that p(x|y) = p(x|y & w) doesn’t make w “irrelevant.” Knowing that the probability is unchanged when w is present gives p(x|y & w) another evidential weight (“weight of argument”). Running 10 replicative experiments does not make you as “sure” of your inductions as running 10 000 varied experiments – even if the probability values happen to be the same.
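Schematically – the notation is mine, but the distinction is Keynes’s:

```latex
% w may leave the probability of x completely untouched ...
p(x \mid y) \;=\; p(x \mid y \wedge w)
% ... while still raising the weight of argument V, the amount of
% relevant evidence behind the probability assignment:
V(x \mid y \wedge w) \;\geq\; V(x \mid y)
```

Read more…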
from Lars Syll
Alex Rosenberg — chair of the philosophy department at Duke University, renowned economic methodologist and author of Economics — Mathematical Politics or Science of Diminishing Returns? — had an interesting article on What’s Wrong with Paul Krugman’s Philosophy of Economics in 3:AM Magazine the other day. Writes Rosenberg: Read more…
from Lars Syll
Walked-out Harvard economist Greg Mankiw has more than once tried to defend the 0.1 % by invoking Adam Smith’s invisible hand:
[B]y delivering extraordinary performances in hit films, top stars may do more than entertain millions of moviegoers and make themselves rich in the process. They may also contribute many millions in federal taxes, and other millions in state taxes. And those millions help fund schools, police departments and national defense for the rest of us …
[T]he richest 1 percent aren’t motivated by an altruistic desire to advance the public good. But, in most cases, that is precisely their effect.
When reading Mankiw’s articles on the “just desert” of the 0.1 % one gets a strong feeling that Mankiw is really trying to argue that a market economy is some kind of moral free zone where, if left undisturbed, people get what they “deserve.”
Where does this view come from? Most neoclassical economists actually have a more or less Panglossian view on unfettered markets, but maybe Mankiw has also read neoliberal philosophers like Robert Nozick or David Gauthier. The latter writes in his Morals by Agreement:
The rich man may feast on caviar and champagne, while the poor woman starves at his gate. And she may not even take the crumbs from his table, if that would deprive him of his pleasure in feeding them to his birds.
Now, compare that unashamed neoliberal apologetics with what three truly great economists and liberals — John Maynard Keynes, Amartya Sen and Robert Solow — have to say on the issue: Read more…
from David Ruccio
from Lars Syll
Mathematical statistician David A. Freedman‘s Statistical Models and Causal Inference (Cambridge University Press, 2010) is a marvellous book. It ought to be mandatory reading for every serious social scientist – including economists and econometricians – who doesn’t want to succumb to ad hoc assumptions and unsupported statistical conclusions! Read more…
Here is a free 800-page book from the World Economics Association
Paul D. Egan and Philip Soos
In Bubble Economics, Paul Egan and Philip Soos explore a depressed Australia in the 1840s, 1890s and 1930s. They detail recurrent patterns of boom-bust credit and asset cycles which heralded financial instability, particularly following speculation in commercial and residential land markets. A financial stability model – based on the Georgist, post-Keynesian and behavioural finance schools of economic thought and informed by data from 1830 to 2013 – is put forward to predict economic downturns. The trends in Australia’s current trade settings, residential property market and banking sector are ominously similar to the key precursors of Australia’s ‘Great Depression’ of the 1890s – a recession or depression may now be imminent. Egan and Soos expose ‘rentier economics’ in the land down under and discard the dominant neoclassical paradigm, bringing a fresh perspective to the intense debate about Australia’s economic future.
from Lars Syll
The task of this book is to explain wage rigidities … It seems reasonable to hope that a successful explanation of wage rigidity would contribute to understanding the extent of the welfare loss associated with unemployment and what can be done to reduce it … Many theories of wage rigidity and unemployment include partial answers to these questions as part of their assumptions, so that the phenomena of real interest … are described in the theories’ assumptions. For instance, Lucas concludes that increased unemployment during recessions implies little welfare loss … Lucas’s policy conclusions are not strongly supported … Good support can come only from information that distinguishes his microeconomic assumptions from others yielding different policy recommendations.
A fanciful example may illustrate the danger of taking too narrow a view of instrumentalism. You are an explorer seeking contact with the Dafs, an isolated tribe about which almost nothing is known. You observe one of their villages through binoculars from far away … You observe that every morning on sunny days, men wearing bright yellow hats stand in the backyards and make sweeping gestures toward the sky … When you finally arrange a meeting with some Dafs, you meet a few men with yellow hats and a few other plainer people. Believing the first to be leaders, you offer them presents, at which point all the Dafs are outraged and assault you. What you have not observed is that yellow hats mark slaves, who throw grain to the household chickens in the yard on sunny days and inside on rainy ones … Read more…
from Lars Syll
“Nominal wages are sticky”: Well, every piece of research I’ve seen on this subject … agrees that nominal wages are sticky, at least in the downward direction. But the kind of exogenous stickiness in most “New Keynesian” models doesn’t make a lot of sense. So this “undeniable truth” gets only a provisional pass, since the real “stickiness” might not affect the economy in the way “Keynesians” think.
“A lot of unemployment is involuntary”: The more you think about models of labor and unemployment, the more you realize that “voluntary” is not a well-defined term. But since many unemployed people definitely seem to think (correctly or incorrectly!) that they can’t find any sort of job, I’ll give this one a provisional pass as well, with the caveat that “involuntary” is defined in the mind of the unemployed person.