from Lars Syll
Twenty years ago, yours truly had an article in History of Political Economy (no. 25, 1993) on revealed preference theory.
Paul Samuelson wrote a kind letter and informed me that he was the one who had recommended it for publication. But although he liked a lot in it, he also wrote a comment — published in the same volume of HOPE — saying:
Between 1938 and 1947, and since then as Pålsson Syll points out, I have been scrupulously careful not to claim for revealed preference theory novelties and advantages it does not merit. But Pålsson Syll’s readers must not believe that it was all redundant fuss about not very much.
I came to think about this little episode when, preparing for a lecture on the law of demand, I re-read Stanley Wong’s minor classic on Samuelson’s revealed preference theory. And I have to admit I still find the theory much fuss about not very much. Read more…
from Lars Syll
Earlier this autumn yours truly was invited to participate in the New York Rethinking Economics conference. A busy schedule didn’t allow me to “go over there.” Fortunately some of the debates and presentations have been made available on the web, as for example here. Listening a couple of minutes into that video, one can hear Paul Krugman strongly defending the loanable funds theory.
Unfortunately this is not an exception among “New Keynesian” economists.
Neglecting anything resembling a real-world finance system, Greg Mankiw — in the 8th edition of his intermediate textbook Macroeconomics — has appended a new chapter to the other nineteen chapters, in which finance is more or less equated with the neoclassical thought-construction of a “market for loanable funds.”
On the subject of financial crises he admits that Read more…
from Maria Alejandra Madi and the WEA Pedagogy Blog
Global business has been overwhelmed by the financialisation of wealth. Beyond financial and “rationalization” strategies, social conflicts and tensions have intensified as labor relations are adjusted to capital mobility and short-run returns. In this historical setting, it is worth noting that, in spite of the enormous literature on financial development and inequality, few attempts have been successful in rethinking the intersection between contemporary financial and labor markets in the economics curriculum.
Indeed, in the current context of “institutionalized short-termism”, the expansion of global finance contributes to the redefinition of labor relations. Investors and managers have enlarged profits in the context of a business model that favors downsizing and cost reduction at the expense of employment. As labor costs are frequently considered large expense items, corporations must tightly manage and document those costs in order to minimize the risk of non-compliance, particularly at public companies. According to the global labor union IUF, the current global business scenario fosters changing working conditions that result from: read more
The technological development process that allows electronic transactions instead of exchanges using physical currency has the same merciless and irreversible character as the advent of the electronic calculator in the 70s and digital photography in the 90s: it meant the unavoidable death of the slide rule (then) and of photographic film (more recently). Based on the nature of technological innovations and the market economy’s exploitation of such, we may predict the death of physical currency; bills and coins. It is probably a question of when, not if, this will take place. This paper will discuss some positive possibilities for reform of the financial and monetary system that emerge as a side effect of the unstoppable advances of technology in this field.
A modern financial system consists of a Central Bank (CB) and an extensive network of private financial units. The role of a CB has up to this day been as an interest-rate setter behind the scenes and – in crisis – “lender of last resort” for the network of private licensed (“commercial”) banks and non-bank financial institutions (NBFIs).
The commercial bank network has historically been quite dense, with branches of competing banks within a reasonable distance from customers. The reasons for this geographical diversity have been twofold:
from Neva Goodwin
Adam Smith, generally regarded as the begetter of modern economic theory, stressed issues of growth and distribution, based on an image of smoothly functioning markets. The pieces of Smith’s legacy that remained significant for what I will refer to as 20th century economics (though I will focus especially on the second half of the past century) were the emphasis on growth, and admiration for markets. This truncated legacy greatly reduced the emphasis on distribution, while also missing Smith’s concern that markets might not always function optimally. He especially pointed to monopolistic behavior as a problem, and supported various kinds of government intervention to keep the market on track. Ignoring these caveats, 20th century economists pursued the optimistic program of modeling a world in which perfect markets lead to optimum social outcomes.
The classical economists – those holding the stage approximately until Marshall’s time – also included Karl Marx, whose concerns for inequality and class conflict were shared by Smith (though they expressed themselves very differently). Read more…
from Lars Syll
Paul Krugman has often been criticized by people like yours truly and other Minskyites for getting things pretty wrong on the economics of Hyman Minsky.
When Krugman has responded to the critique, which he himself rather gratuitously portrays as being about “What Minsky Really Meant” or “What Keynes Really Meant,” the overall conclusion is that “Krugman Doesn’t Care.”
The reason given for this rather debonair attitude seems to be that the history of economic thought may be OK, but what really counts is whether reading Minsky — or Keynes — gives birth to new and interesting insights and ideas. Economics is not religion, and simply referring to authority is not an accepted way of arguing in science.
Although I have a lot of sympathy for Krugman’s view on authority, there is a somewhat disturbing and unbecoming coquetry in his attitude towards the great forerunners he is discussing — as his rather controversial speech at Cambridge, commemorating the 75th anniversary of Keynes’ General Theory, bears witness to.
Sometimes — and this goes not only for children — it is easier to see things if you can stand on the shoulders of elders and giants. If Krugman took his time and really studied Keynes and Minsky, I’m sure even he would learn a lot. Read more…
from today’s Guardian
Surging carbon dioxide levels have pushed greenhouse gases to record highs in the atmosphere, the World Meteorological Organisation (WMO) has said.
Concentrations of carbon dioxide, the major cause of global warming, increased at their fastest rate for 30 years in 2013, despite warnings from the world’s scientists of the need to cut emissions to halt temperature rises.
Experts warned that the world was “running out of time” to reverse rising levels of carbon dioxide (CO2) to tackle climate change.
Data show levels of the gas increased more between 2012 and 2013 than during any other year since 1984, possibly due to less uptake of carbon dioxide by ecosystems such as forests, as well as rising CO2 emissions.
The annual greenhouse gas bulletin from the WMO showed that in 2013 concentrations of CO2 in the atmosphere were 142% of what they were before the Industrial Revolution.
Other potent greenhouse gases have also risen significantly, with concentrations of methane now 253% and nitrous oxide 121% of pre-industrial levels.
Between 1990 and 2013 the warming effect on the planet known as “radiative forcing” due to greenhouse gases such as CO2 rose by more than a third (34%).
from Lars Syll
Modern probabilistic econometrics relies on the notion of probability. To be at all amenable to econometric analysis, economic observations allegedly have to be conceived of as random events.
But is it really necessary to model the economic system as a system where randomness can only be analyzed and understood when based on an a priori notion of probability?
In probabilistic econometrics, events and observations are as a rule interpreted as random variables as if generated by an underlying probability density function, and a fortiori – since probability density functions are only definable in a probability context – consistent with a probability. As Haavelmo (1944:iii) has it:
For no tool developed in the theory of statistics has any meaning – except, perhaps, for descriptive purposes – without being referred to some stochastic scheme.
When attempting to convince us of the necessity of founding empirical economic analysis on probability models, Haavelmo – building largely on the earlier Fisherian paradigm – actually forces econometrics to (implicitly) interpret events as random variables generated by an underlying probability density function.
This is at odds with reality. Randomness obviously is a fact of the real world. Probability, on the other hand, attaches to the world via intellectually constructed models, and a fortiori is only a fact of a probability generating machine or a well constructed experimental arrangement or “chance set-up”. Read more…
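The contrast drawn above can be illustrated with a minimal sketch of a “chance set-up”: a generating machine we construct ourselves, where probability statements are meaningful precisely because the mechanism is known by design. (The simulated die below is purely illustrative and, of course, not a claim about real-world economic data.)

```python
import random

random.seed(42)

# A "chance set-up": a die roll produced by a generator whose probability
# model is known by construction. Here probability attaches to the world
# only because we built the probability-generating machine ourselves.
n = 100_000
rolls = [random.randint(1, 6) for _ in range(n)]

freq_six = rolls.count(6) / n
print(f"empirical frequency of a six: {freq_six:.3f}")
print(f"theoretical probability:      {1/6:.3f}")
```

For economic observations there is no such designed machine behind the data, which is exactly the gap the passage above points to.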
Economics textbooks are not only written for students. At two critical points in the history of economic thought textbooks have played significant roles in defining the field, not only for what is taught, but more importantly (in terms of real world outcomes) for the understanding of the economy that is used by politicians, policy makers, and the public, when it votes its approval or disapproval of how the government is affecting the economy.
This started in the 1890s, when Alfred Marshall wrote the first edition of his text, called Principles of Economics. It went through 8 editions, the last being published in 1920. For a large part of the English-speaking world Marshall’s textbook continued to define the field (especially the microeconomics basics) until the middle of the 20th century, when it was replaced by Paul Samuelson’s Economics (first published in 1948). That set the standard for about the next 60 years.
Virtually all economies are currently growing both physically and financially, within a global envelope that is finite, non-growing and materially closed. A prevailing view, such as within the OECD and UNEP, is that the physical growth of throughput can be decoupled from the non-physical (financial) growth of GDP through innovation, which is commonly branded as “green growth” or “sustainable growth”. This view is also reflected, for example, in policy proposals for the next United Nations Climate Change Conference that emphasize decoupling emissions from growth (European Commission 2014). Two forms of decoupling are discussed in the literature: With relative decoupling, the growth of environmental impacts slows down relative to GDP due to efficiency improvements. With absolute decoupling, the environmental impact decreases as GDP grows.
To perpetuate a growing GDP under conditions of absolute biophysical limits will require—it is argued—compensation in terms of absolute decoupling of both the inflows from and the outflows into the environment. Relative decoupling will not suffice; it will merely delay the point in time when one or more limits are reached. Moreover, absolute decoupling will have to be achieved on a global scale, because improvements in one part of the world might be achieved when production and associated ecological impacts are moved offshore.
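The difference between the two forms of decoupling can be made concrete with some purely hypothetical growth rates (the 3%, 1% and -1% figures below are illustrative assumptions, not data from the literature cited above):

```python
import math

years = 30
gdp_growth, rel_growth, abs_growth = 1.03, 1.01, 0.99   # hypothetical annual rates

gdp = gdp_growth ** years
impact_relative = rel_growth ** years    # grows more slowly than GDP, but still grows
impact_absolute = abs_growth ** years    # shrinks while GDP grows

print(f"GDP after {years} years:           x{gdp:.2f}")
print(f"impact, relative decoupling:  x{impact_relative:.2f}")
print(f"impact, absolute decoupling:  x{impact_absolute:.2f}")

# Relative decoupling only delays a biophysical limit: a ceiling at, say,
# twice today's impact is still reached, after log(2)/log(1.01) years.
print(f"years to hit a 2x impact ceiling: {math.log(2) / math.log(rel_growth):.0f}")
```

The last line makes the point in the paragraph above numerically: under relative decoupling the limit is merely postponed, not avoided.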
If we give up the assumption (2) that agents have no power over the states of the world, and consider that there is a time interval between the action taken and the outcome, other options become available to them. An individual (or a firm) can know what the impact on the world will be of other actions, taken after what we call here the “decision” in the strict sense has been adopted. Let’s suppose a lapse of time divided into two periods, t0 – t1 and t1 – t2. In t0 a subject takes the decision a1, assuming the prevailing state of the world along t0 – t2 will be S1 (the one in which c1 is expected). But then, rather than staying idle, he undertakes some additional actions b1 … bn, designed to produce (or help to create) the needed state S1. These actions, subsequent to the initial decision, are aimed at transforming reality in a precise way in order to get the desired result. We may call them validating actions.
A good example of validating action is propaganda, which tries to install at the top of the agents’ preferences a product whose production has already been decided (or has already been finished).
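The role of validating actions can be given a toy formalisation. Everything below — the function name, the base probability, the effect size — is a hypothetical illustration, not part of the original argument: each action b1, …, bn undertaken between t0 and t2 raises the probability that the needed state S1 actually obtains.

```python
# Hypothetical sketch: the agent does not merely bet on state S1 obtaining;
# after deciding a1 at t0, it spends the interval t0-t2 acting to bring
# S1 about (e.g. a propaganda campaign for a product already in production).

def p_state_s1(validating_actions: int, base: float = 0.3,
               effect: float = 0.15) -> float:
    """Probability that S1 obtains, raised by each validating action,
    with diminishing returns so it never exceeds 1."""
    p = base
    for _ in range(validating_actions):
        p += effect * (1 - p)
    return p

print(p_state_s1(0))   # a passive agent, exposed to whatever state occurs
print(p_state_s1(3))   # an agent undertaking validating actions b1..b3
```

The sketch captures the conceptual point only: the “state of the world” is partly an output of the agent’s conduct, not a datum.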
There are clear signs about what needs to be done to diminish the effect that financialization has on income distribution. It is obvious that the solution to the problem of excessive and wildly mal-distributed incomes is not to set up ethics courses for MBA students at Harvard, London, the Chicago Business School, and elsewhere (Locke, 2011b). Solutions require the adoption of new public policies and legal-institutional change. They involve politics and are about grasping power. Nor should political control be sought primarily in underdeveloped and/or developing countries, where financialization wreaks havoc. The West is not driven by some financialization monolith; there are strong advanced economies, as the German example shows, and a political base, even within the business community, that is ready to oppose this juggernaut. The choice is simple: if people want to keep undesirables out of their community, why just pass anti-immigrant or vagrancy laws; they also need to stop rich financial interlopers in private equity firms from buying local firms and using bankruptcy statutes to deprive employees of their pension and benefit plans. They also need, as the Germans do, to give employee representatives on supervisory boards a voice in setting the salaries of top management and in firm governance, so that they can resist acquisitions and takeovers. It won’t be easy; witness American workers’ recent rejection (under intense pressure from Republicans and the business community) of the union at Volkswagen’s plant in Tennessee, which spoiled the company’s attempt to introduce a works council in the plant (Volkswagen is fully unionized, with works councils included in its governance, everywhere in its worldwide operations except Tennessee).
For industrial systems, a low throughput of matter and energy implies a smaller ecological footprint and greater life expectancy and durability of goods and infrastructure; a high throughput implies more depletion of resources that will need to be renewed and more waste that will need to be disposed of (Meadows and Wright 2008). System dynamics and thermodynamics tell us that a tolerable rate of throughput and entropic transformation is ultimately dictated by the natural system, not by economics or engineering.
A possible task for engineering, within limits, would be to maximise the durability of stocks by minimising inflows of low entropy natural resources and by minimising outflows of high entropy waste and emissions. The role that industrial societies have assigned to technology is, however, much more Herculean. We have asked it to simultaneously and boundlessly minimise environmental impacts and maximise economic growth. In 1966, Kenneth Boulding suggested: “We are very far from having made the moral, political, and psychological adjustments which are implied in this transition from the illimitable plane to the closed sphere” (Boulding 1966: 2-3). How far are we now, almost half a century later?
We have indeed come full circle. The whole vision of the working of the macrosystem presented, in terms of the AD/AS model, by far too many contemporary textbooks is essentially pre-Keynesian. Monetary spending may fluctuate, but whether or not such fluctuations affect employment and output is said to depend on reactions affecting real wages. Slow adjustment of money wages to price changes is held to account for cyclical variations in employment and output. With respect to the longer term, it is presumed that real wages return to their proper full-employment level. There are then no obstacles on the side of demand to prevent re-establishment of the ‘natural’ (full employment) level of activity. The pale shadow of Keynesian theory in the AD/AS model – the AD curve – has nothing to do with the values of output and employment at equilibrium, only with the price level.
The core of my argument is that many sequences of events that are presented as mechanisms (i.e., as sequences of events organized in a stable way and leading to results known beforehand) in theoretical models are actually socially constructed by the presence (often tacit) of regulations and institutions that eliminate otherwise alternative options. My argument is against the alleged naturalness of social sequences modeled within theoretical models. These sequences do not reflect social laws (like physical laws), or mechanisms in the usual sense of the term (used in current mechanismic literature). When they are represented within theoretical models, they are not much more than modeled representations of truncated processes, which are open-ended in reality. Theoretical mechanisms are obtained assuming as “natural” and given (i.e., unchangeable as a matter of principle) institutional features that are actually historically determined and perfectly modifiable.
from Lars Syll
I’ve never yet been able to understand why the economics profession was/is so impressed by the Arrow-Debreu results. They establish that in an extremely abstract model of an economy, there exists a unique equilibrium with certain properties. The assumptions required to obtain the result make this economy utterly unlike anything in the real world. In effect, it tells us nothing at all. So why pay any attention to it? The attention, I suspect, must come from some prior fascination with the idea of competitive equilibrium, and a desire to see the world through that lens, a desire that is more powerful than the desire to understand the real world itself. This fascination really does hold a kind of deranging power over economic theorists, so powerful that they lose the ability to think in even minimally logical terms; they fail to distinguish necessary from sufficient conditions, and manage to overlook the issue of the stability of equilibria.
Almost a century and a half after Léon Walras founded neoclassical general equilibrium theory, economists still have not been able to show that markets move economies to equilibria. Read more…
Smith’s commitment to “equity” for the working class was behind the vehemence of his opposition to mercantilist (“business economics”) arguments for policies that would protect or promote the profits of producers and intermediaries. Smith saw such pro-business arguments—which arguably persist as the core of neoliberalism (Harvey 2007)—whether for direct subsidies or competition-restricting regulations, as an intellectually bankrupt and often morally corrupt rhetorical veil for what were actually “taxes” upon the poor (what we now call “rents”). Such taxes are unjust and outrageous because they violate fair play both in the deceptive rhetoric by which they are advanced and by harming the interests of one group in society (generally, the poor and voiceless) to further the interests of another (unsurprisingly, the rich and politically connected). Smith explicitly moralised the point,
To hurt in any degree the interest of any one order of citizens, for no other purpose but to promote that of some other, is evidently contrary to that justice and equality of treatment which the sovereign owes to all the different orders of his subjects (WN IV.viii.30).
from Lars Syll
There have been over four decades of econometric research on business cycles … The formalization has undeniably improved the scientific strength of business cycle measures …
But the significance of the formalization becomes more difficult to identify when it is assessed from the applied perspective, especially when the success rate in ex-ante forecasts of recessions is used as a key criterion. The fact that the onset of the 2008 financial-crisis-triggered recession was predicted by only a few ‘Wise Owls’ … while missed by regular forecasters armed with various models serves us as the latest warning that the efficiency of the formalization might be far from optimal. Remarkably, not only has the performance of time-series data-driven econometric models been off the track this time, so has that of the whole bunch of theory-rich macro dynamic models developed in the wake of the rational expectations movement, which derived its fame mainly from exploiting the forecast failures of the macro-econometric models of the mid-1970s recession.
The limits of econometric forecasting have, as noted by Qin, been critically pointed out many times before.
Trygve Haavelmo — with the completion (in 1958) of the twenty-fifth volume of Econometrica – assessed the role of econometrics in the advancement of economics, and although mainly positive about the “repair work” and “clearing-up work” done, Haavelmo also found some grounds for despair: Read more…
from Lars Syll
In Andrew Gelman and Jennifer Hill’s Data Analysis Using Regression and Multilevel/Hierarchical Models, the authors list the assumptions of the linear regression model. At the top of the list are validity and additivity/linearity, followed by different assumptions pertaining to error characteristics.
Yours truly can’t but concur, especially on the “decreasing order of importance” of the assumptions. But then, of course, one really has to wonder why econometrics textbooks — almost invariably — turn this order of importance upside-down and don’t discuss more thoroughly the overriding importance of Gelman and Hill’s first two points …
Since econometrics doesn’t content itself with only making “optimal predictions,” but also aspires to explain things in terms of causes and effects, econometricians need loads of assumptions — and most important of these are validity and additivity.
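The point can be seen in a minimal stdlib-only sketch — an illustration of the general issue, not an example taken from Gelman and Hill: the error assumptions can look unobjectionable while the linearity assumption, near the top of their list, fails badly.

```python
# Illustrative sketch: fit a straight line by ordinary least squares to
# data whose true relation is quadratic. No error-term diagnostics are
# needed to see the failure -- the residuals betray it directly.
from statistics import mean

xs = [x / 10 for x in range(-20, 21)]
ys = [x * x for x in xs]            # true relation is quadratic, noiseless

# OLS slope and intercept computed by hand
xbar, ybar = mean(xs), mean(ys)
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
intercept = ybar - slope * xbar

residuals = [y - (intercept + slope * x) for x, y in zip(xs, ys)]
# Residuals are systematically positive at the extremes and negative in
# the middle: a linearity violation, invisible to tests that only probe
# the distribution of the errors.
print(residuals[0], residuals[20], residuals[-1])
```

The fitted slope is essentially zero here, so a naive causal reading of the coefficient would be doubly misleading — which is why validity and additivity/linearity deserve the priority Gelman and Hill give them.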
Let me take the opportunity to cite one of my favourite introductory statistics textbooks on one further reason these assumptions are made — and why they ought to be much more argued for on both epistemological and ontological grounds when used (emphasis added): Read more…
from Lars Syll
Almost a century and a half after Léon Walras founded neoclassical general equilibrium theory, economists still have not been able to show that markets move economies to equilibria.
We do know that — under very restrictive assumptions — equilibria do exist, are unique and are Pareto-efficient. After reading Franklin M. Fisher‘s masterly paper The stability of general equilibrium: results and problems one however has to ask oneself — what good does that do? Read more…
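What the stability question even looks like can be shown with a minimal tâtonnement sketch. Everything here is an illustrative assumption, not anything from Fisher’s paper: a two-good exchange economy with Cobb-Douglas expenditure shares, price of good 2 normalized to 1, and an auctioneer raising the price of good 1 whenever it is in excess demand. With these particular preferences the process happens to converge; the point of the stability literature is that nothing in the existence results guarantees this in general.

```python
# Hypothetical two-good, two-consumer economy (Cobb-Douglas demands):
# consumer A owns 1 unit of good 1 and spends share 0.3 of wealth on it;
# consumer B owns 1 unit of good 2 and spends share 0.7 of wealth on good 1.

def excess_demand_good1(p1: float) -> float:
    wealth_a = 1.0 * p1          # value of A's endowment
    wealth_b = 1.0               # value of B's endowment (good 2 is numeraire)
    demand = 0.3 * wealth_a / p1 + 0.7 * wealth_b / p1
    return demand - 1.0          # total endowment of good 1 is 1

p1 = 2.0
for _ in range(200):
    p1 += 0.5 * excess_demand_good1(p1)   # tatonnement price adjustment

print(round(p1, 4), round(excess_demand_good1(p1), 6))
```

Here the adjustment settles at the market-clearing price, but swapping in other preferences (Scarf’s well-known examples) makes the very same auctioneer cycle forever — which is the problem Fisher’s survey confronts.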