
Input-output analysis as an alternative to ‘micro-founded’ models

Can economists calculate the carbon dioxide emissions caused by one additional unit of consumption? Yes, they can. Can economists using ‘micro-founded (not!)’ neoclassical models do this? No, they can’t.

Which means that David Glasner does society a great service by debunking, on his ‘Uneasy Money’ blog, Tony Yates, who shows off his professional ignorance by stating, about the micro-founded (not!) neoclassical macroeconomic models:

“My own position is that these are the ONLY models that have anything genuinely economic to say about anything.”

The science of economics, dumbed down to using the refuted indifference curve (to name just one problem: google ‘Arrow paradox’).

Wow.

Let’s, for the sake of pluralism and proper science, consult a newsletter of the International Input-Output Association. Input-output models are, contrary to the neoclassical models mentioned above, based upon well-defined data and concepts and enable us to answer questions like the following (quotes from the newsletter):

A) BALANCE SHEET ECONOMICS OF THE SUBPRIME MORTGAGE CRISIS
As Copeland (1947, 1952) demonstrated with his money-flow accounts more than half a century ago, the balance sheets of economic entities are closely interrelated through lender/borrower relationships. This paper is an attempt to describe the U.S. subprime mortgage crisis in the framework of ‘balance sheet economics’, which was originally proposed by Stone (1966) and Klein (1977, 1983) (TSUJIMURA M. and TSUJIMURA K.)

No neglect of stocks (i.e. debts, among other things) and flows of money in input-output analysis. This alone already makes input-output models superior to the neoclassical models.
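Copeland’s point, that balance sheets interlock because every financial asset is some other sector’s liability, can be illustrated with a toy who-to-whom matrix. The sector names and numbers below are purely illustrative, not actual flow-of-funds data:

```python
# Hypothetical three-sector who-to-whom matrix (illustrative numbers only):
# claims[i][j] = claims sector i holds on sector j, in billions.
# In Copeland-style flow-of-funds accounting, every financial asset is
# someone else's liability, so net positions must sum to zero.

sectors = ["households", "banks", "government"]

claims = [
    [0.0, 120.0, 40.0],   # households hold deposits on banks, bonds on govt
    [90.0, 0.0, 30.0],    # banks hold mortgages on households, bonds on govt
    [5.0, 10.0, 0.0],     # government holds tax claims and bank deposits
]

# A sector's financial assets are its row; its liabilities are its column.
assets = [sum(row) for row in claims]
liabilities = [sum(claims[i][j] for i in range(3)) for j in range(3)]
net_worth = [a - l for a, l in zip(assets, liabilities)]

for name, nw in zip(sectors, net_worth):
    print(f"{name:11s} net financial position: {nw:+.1f}")

# Consistency check: in a closed system, net positions sum to zero.
assert abs(sum(net_worth)) < 1e-9
```

A household deleveraging shock propagates along the rows and columns of such a matrix, which is exactly what the single aggregate budget constraint of a representative-agent model cannot show.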

B) THE LIFE CYCLE ENVIRONMENTAL IMPACTS OF CONSUMPTION
This paper reviews assessments of environmental impacts arising from consumption, taking into account the production and disposal of goods consumed (HERTWICH E.)

No neglect of stocks and flows of materials which, if not properly re-used, might damage the environment! This alone already makes these input-output models superior to the neoclassical models. More on this in the next article:

C) AGGREGATION VERSUS DISAGGREGATION IN INPUT-OUTPUT ANALYSIS OF THE ENVIRONMENT
Analysts carrying out input-output analyses of environmental issues are often plagued by environmental and input-output data existing in different classifications, with environmentally sensitive sectors sometimes being aggregated in the economic input-output database. In principle there are two alternatives for dealing with such misalignment: Either environmental data have to be aggregated into the input-output classification which entails an undesirable loss of information, or input-output data have to be disaggregated based on fragmentary information. I show that disaggregation of input-output data, even if based on few real data points, is superior to aggregating environmental data in determining input-output multipliers (LENZEN M.)

Oops, the ‘M’ word is mentioned: the ‘multiplier’. Multiplier estimation using input-output models is superior to the neoclassical approach, as it is based upon much more disaggregated data than the models used by, for instance, Olivier Blanchard. In input-output models (as used, for instance, by Eurostat) it matters whether one additional euro is spent on buildings or on employing new government employees (a practical point: according to this graph, a construction bust of a billion euro in the Eurozone will ‘out-multiplier’ a billion of additional export, government, or tourism spending!)
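The mechanics behind such output multipliers can be sketched with a toy two-sector Leontief model. The coefficients below are made up for illustration; they are not the Eurostat figures in the graph:

```python
import numpy as np

# Illustrative two-sector input-output coefficient matrix:
# A[i, j] = euros of input from sector i needed per euro of sector j's output.
A = np.array([
    [0.2, 0.3],   # inputs supplied by sector 0 (say, construction)
    [0.1, 0.4],   # inputs supplied by sector 1 (say, services)
])

# Leontief inverse: total (direct + indirect) output required per euro of
# final demand, from x = (I - A)^{-1} f.
L = np.linalg.inv(np.eye(2) - A)

# Simple output multipliers: column sums of the Leontief inverse.
multipliers = L.sum(axis=0)
print("output multipliers:", multipliers)

# One extra euro of final demand for sector 0 raises total output by
# multipliers[0] euros, spread over both sectors:
f = np.array([1.0, 0.0])
print("output response to 1 euro of final demand for sector 0:", L @ f)
```

Because the multipliers differ by sector, it matters where the extra euro lands; that is precisely the information a one-good aggregate model throws away. Eurostat computes its published multipliers the same way, but from symmetric input-output tables with dozens of product groups.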

Output multipliers for the EU and the USA, 2009, for product groups (source: Eurostat)


  1. January 4, 2014 at 11:39 am

    I am very surprised to see you offer ‘input output analysis’ as an ‘alternative’. Input-output data are interesting phenomena for theoretical models to explain, not alternatives. Standard representative yeoman farmer models are obviously refuted by or silent about input output data. But many microfounded models are not. See, for example, Acemoglu et al’s Econometrica paper on the network origins of aggregate fluctuations, and the references therein, but there are many others too. It’s an open question whether it’s possible to address some issues without a model that articulates multiple sectors with a rich input output structure. I guess for some questions you can, and for some you can’t. To take an extreme example, the RBC model doesn’t explain the weather, but we don’t usually think of that as one of its many problems.

    You incorrectly judge me to view only ‘neoclassical’ models as acceptable. Not true. Most of my papers use (other people’s) sticky price business cycle models, which are not usually dubbed ‘neoclassical’, but, instead ‘New Keynesian’, since they provide one story for how business cycles are inefficient and how macro policy can help iron them out.

    • merijnknibbe
      January 4, 2014 at 8:18 pm

      I’ve asked mainstream economists several times to explain to me the concept of ‘utility’ in the social indifference curves of the supposedly micro-founded models (which, as they are not based on the aggregation of micro data, are not micro-founded at all). What’s the concept? How are decisions taken? How is it operationalized? Is there any conceptual reason why it is supposed to behave in a way which can be captured by a Cobb-Douglas function? Who measures it? The BEA? The BLS? There are supposed to be all kinds of intertemporal contracts; who administers these? Where can I find the manual which states how it has to be estimated? How is the sector ‘households’ defined (are jails part of this sector)? Nobody has ever been able to give me even the start of an answer; it’s an article of faith. Can you answer these questions?

      • January 6, 2014 at 9:29 am

        You ignore my response and then try another line of attack. Well, these questions are silly. Have you encountered neuroeconomics? Have a look at Paul Glimcher’s textbook on this. There are dozens of scholars investigating/refining/refuting the axiomatic foundations of utility theory by trying to measure the activity in the brain.
        However, no-one needs to measure utility in order to operationalise microfounded macro or asset pricing models. Given assumptions about utility, we can derive predictions about the laws of motion of series that the BEA does collect. And then refute the models, and make progress that way.

  2. January 4, 2014 at 2:17 pm

    Another thing: why do you say debt or flows of money are neglected in microfounded models? They are in some, and they aren’t in others.

    • merijnknibbe
      January 4, 2014 at 8:03 pm

      I should have been clearer. I’ve encountered patient and impatient consumers in so-called ‘micro-founded’ models, who lend to and borrow from each other. However, this is, at least in the models I’ve read, based on the empirically totally refuted loanable funds theory.

      ‘Loanable funds’ is totally at odds with money in the real world, as any manual on monetary statistics will teach you (please, read something about how the stuff you’re writing about is measured!). There is no such thing as a ‘loanable fund’, a restricted amount of money created by the government. In our system money creation is largely outsourced to the private sector. And this private sector has created trillions and trillions of dollars and euros based upon ever-increasing house prices, prices which could increase because trillions of dollars and euros were created. No loanable funds were needed, as the loans created deposits as well as higher house prices! You might want to read this post about the insignificance of central banks when it comes to money creation! https://rwer.wordpress.com/2012/01/26/central-bankers-were-all-post-keynesians-now/
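The ‘loans create deposits’ mechanism can be sketched as a minimal double-entry example. The amounts and account names are hypothetical, for illustration only:

```python
# Minimal double-entry sketch of 'loans create deposits' (endogenous money).
# Illustrative only: one bank, one borrower, hypothetical amounts.

bank = {"loans": 0.0, "deposits": 0.0}          # assets / liabilities
borrower = {"deposit": 0.0, "mortgage_debt": 0.0}

def grant_loan(amount: float) -> None:
    """The bank expands both sides of its balance sheet at once:
    a new loan (asset) matched by a new deposit (liability).
    No pre-existing pool of 'loanable funds' is drawn down."""
    bank["loans"] += amount
    bank["deposits"] += amount
    borrower["deposit"] += amount
    borrower["mortgage_debt"] += amount

grant_loan(200_000)  # e.g. a mortgage

# Both balance sheets still balance; the stock of deposits (money) has grown.
assert bank["loans"] == bank["deposits"] == 200_000
print("new deposits created:", bank["deposits"])
```

Nothing was ‘lent out of’ anyone’s prior savings: the loan and the deposit came into existence together, which is why rising house prices and rising money stocks could feed each other.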

      • January 6, 2014 at 9:36 am

        This reply is totally confused. You are probably talking about an encounter with the Bernanke-Gertler model. The fact that private instruments possess money-like qualities does not ‘refute’ models of real credit. You mistake the purpose of that particular model. It contains all kinds of things which are clearly abstractions. There is no money of any sort in that model, private or public. The point is to articulate the effects of net worth on a borrower’s cost of funds. One small part of the story. There are other papers that study money-like qualities of private instruments. Kiyotaki-Moore, for example. Also see the work of Brunnermeier.
        But all this is by the by: the real point is that your many and varied dissatisfactions with papers you have encountered don’t constitute legitimate grounds for rejecting microfoundations. They just constitute (in my opinion entirely spurious and misinformed, but leave that aside) grounds for building better ones.

  3. BFWR
    January 4, 2014 at 4:02 pm

    The balance sheet is indeed the place to look for significant data. There you will find the various costs of any and every enterprise as well as the amounts of individual incomes they create, and the relationship between these two figures. That’s the enterprise’s total costs (which is synonymous with the total prices it needs to liquidate if it is to be solvent) and the total individual incomes simultaneously created to liquidate those costs. That is the most basic and important metric to seek on both the micro and the macro level. Why? Because it is the most basic picture of a monetary economy itself, i.e. the exchange of individual incomes for goods/services/prices. This metric also shows us where all costs are summed (retail sale to an individual), which enables one to cut through the mass of data and assess the economic effects at the true end of costs. It also enables us to understand that when money enters the economy there is no neglecting the element of cost/price. This metric enables us to seamlessly aggregate and assess the most basic/elemental state of any business and also the entire economy. And that is a radical state of monetary scarcity of individual incomes in ratio to prices.

  4. Podargus
    January 4, 2014 at 7:11 pm

    Indeed, all clever stuff. But whether one breed of economist can predict something another can’t in regard to climate change is knowledge that is about as useful as tits on a bull.

    It seems to me that most economists, whatever creed they profess to follow, are fully paid-up members of the Growth At Any Cost Club (aka dinosaurs). As such, like any religion, they are part of the problem, not part of the solution.

  5. merijnknibbe
    January 6, 2014 at 1:41 pm

    #3 Thanks for your reply.

    As to the question about neuroeconomics: I’ve read a little neurology. And as far as I’m concerned, the homo economicus clearly is an invention of the ‘left brain interpreter’ of classical economists. One of the main findings of brain imaging seems to be that there exists something called the ‘left brain interpreter’. According to Wikipedia (I’ve read more, but Wikipedia is often very quotable):

    “the left brain interpreter refers to the construction of explanations by the left brain in order to make sense of the world by reconciling new information with what was known before.[1] The left brain interpreter attempts to rationalize, reason and generalize new information it receives in order to relate the past to the present”

    and:

    ” the left brain interpreter can be seen as the glue that attempts to hold the story together, in order to provide a sense of coherence to the mind.[3] In reconciling the past and the present, the left brain interpreter may confer a sense of comfort to a person, *by providing a feeling of consistency and continuity in the world. This may in turn produce feelings of security that the person knows how “things will turn out” in the future* .

    However, the facile explanations provided by the left brain interpreter may also enhance the opinion of a person about themselves and produce strong biases which prevent the person from seeing themselves in the light of reality and repeating patterns of behavior which led to past failures.[2] The explanations generated by the left brain interpreter may be balanced by right brain systems which follow the constraints of reality to a closer degree.”

    Especially the part which starts with ‘by providing’ seems to me an apt description of at least part of the reason why so many economists love the ‘homo economicus’; it’s about the definition of ‘ergodic’! One of the great joys of statistics is that it often shows you that you were wrong – who would have expected the lasting decline of the activity rate in the USA, the unprecedented lasting drop in UK productivity, or the roughly 30% unemployment rates in Spain and Greece! Statistics force us to acknowledge these data.

    And about the measurement of utility: one of the greatest failures of mainstream economics has of course been the very measurement of utility. Paul Samuelson was acutely aware of this and tried ‘revealed preference’. Despite all efforts, however, there are still (for good reasons, as actual human choices are, as all marketeers know, often time-inconsistent) no standard ways to estimate revealed preference even in laboratory circumstances, let alone in real-life situations. Even Hal Varian concedes, in a book called ‘Samuelsonian Economics and the Twenty-First Century’, that he expects that *in the future* revealed preference will make great progress as a way to estimate utility. Right. They’ve been trying to estimate it for sixty years now! Nothing. What a failure.

    Which leaves us with the aggregation question. According to the Arrow Paradox, you can’t really aggregate different individual utility curves into a macro one which has all the neat aspects of the textbook individual curves. Not even by assuming that there is only one ‘representative’ consumer.

    • January 6, 2014 at 3:16 pm

      To be honest, this is depressing and insulting. You obviously know nothing of the industry examining mainstream economics’ connection with the brain, and given a reference to a survey in the field, you instead choose to try to offer me a trivial quote from Wikipedia. This is not worthwhile debate. Frankly, your other contributions fall in the same category. There are plenty of legitimate challenges to mainstream macro, but you don’t mount one. Although your heart is in the right place, you know almost nothing of what you try to criticise. But, I repeat my main point again: nothing you have said is a critique of the methodology of microfounding; you are simply trying (but in a very scattergun fashion) to critique some of its conclusions (which you have not even properly digested, nor can you therefore even remember or cite properly). It’s strange therefore that you chose at the outset to pit yourself against me and on the side of Krugman and Smith. You probably didn’t read or understand what it was they were rooting for: proper microfoundations (Smith), and pragmatic microfoundations (Krugman and Smith). This embraces pretty much everyone in modern mainstream macro. Your post would be better re-written as ‘I see Yates getting beaten up over the issue of microfoundations by Krugman and Smith, but frankly I think they are all talking rubbish’.

  6. Adrian Toll
    January 6, 2014 at 4:55 pm

    Here’s a link to that Wikipedia article in case anyone’s interested, which is a travesty in relation to current neuroscientific research: https://en.wikipedia.org/wiki/Left_brain_interpreter

    The whole left brain / right brain distinction has been pretty comprehensively debunked by neuroscience. For just one example see http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0071275 which was also summarised here: http://www.theguardian.com/commentisfree/2013/nov/16/left-right-brain-distinction-myth

    Quite apart from shabby Wikipedia articles, it seems rather ridiculous to suggest that there are large-scale hemispheric differences when the brain is so computationally complex, with massive amounts of parallel processing going on at any one time. The shame of it is that the evidence has been there for a very long time. The classic myth that the left is for language, to take just one example, is very easy to slough off when you read neuropsychological studies of language problems after brain damage, which show that you’re just as likely to have significant language impairments if you have damage to the right hemisphere as you are if you have damage to the left (always involving somewhere around the ventral occipitotemporal cortex). I could go on (and on).

    If that’s as far as your understanding of a link between neuroscience and economics goes, I’d recommend reading a bit further than Wikipedia (and you really don’t have to go that far). This is a good primer, despite the rather gaudy cover: http://www.amazon.co.uk/Neuroeconomics-Decision-Paul-W-Glimcher/dp/0123741769 You’d be much more interested in brain regions such as the orbitofrontal cortex and dorsolateral prefrontal cortex in both hemispheres, and you’d hopefully be much more wary of using neuroscientific hand-waving: http://blogs.discovermagazine.com/neuroskeptic/2013/11/10/handwavers-guide-brain/

  7. merijnknibbe
    January 7, 2014 at 8:22 am

    #10

    Dear Tony,

    Micro-founded models are not an abstract map of a country like a map of the highways of France. The highway map does bring me, together with some tacit knowledge, to ‘Le moulin du rock tombé’, my favorite hangout in southern France, near Sint Ambroix. They are Fantasia maps, which bring us nowhere. Aside: as such they are a better description of seventeenth- and eighteenth-century Frisian city and village dwellers (of whom I have extensive data) than of a modern society (which in those days already included Holland and Amsterdam and, financially, Friesland too, in fact)! Yes, the proverbial Adam Smith society where companies with over four employees were considered to be large! To show this, it’s getting time for a little close reading. One point we both probably agree upon is that one of the most important micro-founded (not!) models around is the New Area-Wide Model of the ECB. How does this model look at households? A quote, capital letters added (the formula is omitted, as it does not work in this blog):

    “(A) There is a continuum of households indexed by h Є [ 0, 1 ], (B) the instantaneous utility of which depends on the level of consumption as well as (C) hours worked. (D) Each household accumulates physical capital, the services of (E) which it rents out to firms, and (F) buys and sells domestic government bonds as well as internationally traded foreign bonds. This enables households to smooth their consumption profile in response to shocks. (G1) The households supply differentiated labour services to firms and act as wage setters in monopolistically competitive markets. As a consequence, (G2) each household is committed to supply sufficient labour services to satisfy firms’ labour demand.

    Preferences and Constraints
    (H)Each household h maximises its lifetime utility in a given period t by choosing purchases
    of the consumption good, Ch,t, purchases of the investment good, Ih,t, which determines
    next period’s physical capital stock, Kh,t+1, the intensity with which the existing capital
    stock is utilised in production, uh,t, and next period’s (net) holdings of domestic government
    bonds and internationally traded foreign bonds, Bh,t+1 and B∗ h,t+1, respectively, given the following lifetime utility function:”

    Wow.

    (A1) only works when all (German, French, …) households are the same, which, when looking at the quite different demographics of these countries, is far-fetched.
    (A2) No differences between rich and poor, capital owners and non-capital owners, young and old, employed and unemployed (see below), Greek and German households, house owners and house renters…
    (B) I fully agree with Samuelson (see his Nobel lecture): when utility is not independently estimated, the concept is tautological (“people do what they do”, to quote Samuelson).
    (C) Do people really only dislike work? Or does our job, including the respect (or not) and the hierarchical and social relations which come with it, also define us, for better or worse (in your terminology: shift the utility curve)?
    (D) Wow, again: companies and entrepreneurs do not invest, households do! A neat way to assume the ‘animal spirits’ of entrepreneurs away!
    (E) Our most important capital goods, financially, are houses. It’s not clear how these are treated in the model (again, this is by far the most important single type of investment good!). Are they a consumer durable and thus, in the model, represented by the single consumer good (see below)?
    (F) In the model, this is the capital market which enables households to smooth consumption. In reality, households borrow *new* money from MFIs. The flow of funds (mentioned in the original post) shows us how devastating that was!
    (G) “There is no unemployment.” Unemployment is defined away. As there is only one consumer good and as there are no technological differences between countries in the model, this kind of thinking led and leads people to believe that the only difference between Spain and Germany is labour costs, and that a decline of labour costs is supposed to solve all current account disequilibria (I’m stepping outside the borders of the model here; current accounts are defined away in the model, as they are supposed to be part of the consumption smoothing of households). The input-output models, however, show us that not all goods are produced with the same technologies, and that we have to think in global production chains! http://www.voxeu.org/article/new-world-input-output-database
    (H) Lifetime utility… please, again, write something intelligent about this concept or stop using it.

    Using such a model, which assumes away all important things (unemployment, current accounts, the financial sector, differences between households, differences between countries, differences between consumer goods: tourism is not the same thing as a Mercedes-Benz), has mightily contributed to the present Eurozone mess (there are more models in this vein which were used to design the present flaws of the Eurozone!). A better model, based upon input-output kinds of thinking, can be found here: http://www.levyinstitute.org/pubs/sevenproc.pdf

    yours,

    Merijn
