
The information conundrum

from Peter Radford

Kolmogorov, I think, hit the nail on the head when he said:

“At each given moment there is only a fine layer between the ‘trivial’ and the impossible.  Mathematical discoveries are made in this layer.”

He might just as well have said that life itself is discovered in this layer.  But let’s not get ahead of ourselves.

A few days ago I tried to point our way towards an acceptance of a certain humility.  I used a couple of quotes, one from Durer and one from Soros, to provide us with insights from well-regarded figures separated by a good long period.  Obviously the need to recognize our inherent fallibility has a long history.  The problem is that we keep forgetting and frequently end up acting as if we had some form of ultimate knowledge.  I used the word ‘utopia’ to stand in as a proxy for this illusion.  I was not talking about Utopia the book.

I have made it a long habit not to engage in public with correspondents who critique what I write, because my purpose is simply to begin conversations and allow correspondents to create their own conversation.  Most often, and most interestingly to me, they go in very different directions than the one I had in mind when I wrote.  Such is the richness of our modern ability to discuss in this remote, electronic, fashion.  We learn, or at least I do.  And that is the invigoration that motivates me to continue.

Once in a while, however, someone will say something that requires my public reaction.  My little Durer and Soros missive is one such occasion.

I have three points of contention to respond to.  I will go in the order in which they appeared.

First, I was accused of never having read “Utopia”.  This is a highly personal attack.  It is offensive.  But par for the course.  Quite how anyone could have this knowledge I don’t know.  In any case, as I indicated above, I was not referring to the book.  I was using the colloquial word ‘utopia’ as a placeholder for any body of knowledge that is based on the illusion of perfect knowledge.  Modern economics comes very close to being such a vision of utopia.  It excludes reality, creates an alternative world, builds theories relating to that alternative world, and then presumes the lessons learned can be imported back to reality with relevance or practical application.  They cannot.  Most economists admit this, yet they continue teaching as if it were true.  The difficulty they face, even after their admission, is that the real world is highly unlikely to contain such pockets of perfection.  And, given its growing complexity, it is becoming ever less likely we would know whether it did.

Perhaps we could truncate this discussion by simply arguing that any body of knowledge that presumes perfection is actually an ideology.  It is a belief system best thought of as a placeholder awaiting substitution by an updated view based on the fallibility that Soros mentions.

Unfortunately in our modern world we are heavily encumbered by such ideologies.  Our poor understanding of the limits of reason and the inscrutability that uncertainty brings us are too often forgotten in our search for answers to questions.   We end up being too confident and often too unaware of the unintended consequences of our actions.

Second, I was then attacked because I asked the question “What new theory is there? Really new?”  This, I thought, was clearly a question pointed at mid-twentieth century economics.  Since about that time economics has been fairly static, contenting itself with embellishment and deep dives into the logic of allocation and so on.  It has covered over and sanitized the major alternatives that threaten its hegemony and settled into a staid middle/old age.  That doesn’t mean that there isn’t activity.  There is.  But the creation of entirely new avenues of thought is rare.

In this second personal attack I was accused, yet again, of not having read a certain text.  In this case Shannon’s “A Mathematical Theory of Communication”.  Again, quite how this accusation can be made in the context of a few remarks I make about fallibility and utopia, I am not sure.

The same correspondent then returns later in the conversation to launch another missile.  This time it is not just me that hasn’t read or understood Shannon, it is the entire profession of economics.  Apparently the obsession that economists have with ‘market forces’ really aggravates this correspondent as he launches the ultimate criticism: economics is stuck in the age of Newton.

Well, there are worse places to be.  Pre-Newtonian fog and superstition being one.

But there are glimmers of sense in this accusation.  Which will, I assure you, bring me back to Kolmogorov.

Economics was born back in a simpler agriculturally dominated era.  Its roots still betray those origins.  It has never been able to reconstruct itself as a body of thought concerning information and its application to raw materials and energy.  It has focused, not on production, but on transactions and the optimal way of organizing them.  To do this it has had to invent for itself a pseudo-psychology in order to explain how the demand for goods and services originates, and then how they are purchased.  It is only with the recent advent of behavioral economics that the older strictures about consumer attitudes have been challenged.  And even then the new ideas have not been allowed to prompt a re-write of the core of economics.

Focusing on transactions implies that economists have spent a lot of time talking about the information needed to make markets work.  The conversation has been endless.  The limitations on thought brought about by the dependence on equilibrium as an organizing metaphor have severely reduced any interest in information beyond that needed to allow rational agents to arrive at such an equilibrium.  This means that, even though they use the word ‘information’ all the time, few economists actually think about information much.  The list of economists who have discussed information is a veritable who’s who of the discipline.  The problem is that they tend to truncate their enquiry by going straight to the word ‘price’, and because they are only interested in transactions, that is deemed sufficient.  The burden on prices is thus immense.

The key point here though is that information is semantic: a price is a piece of information that has meaning.  It is carrying a message of importance.

It is not what Shannon meant by information.

I am writing this missive on my laptop which contains a large array of what I would call information: letters, emails, articles, and so on.  Those things have meaning to me.  That meaning is what we call information in our vernacular usage of the word.  Shannon has a very different view.  If, for some reason, all those objects were mixed up and their contents smeared about, I would, in my vernacular sense, have less information.  But to Shannon the amount of information has increased.  This is because his notion of information is associated with the effort required to communicate the state of a system.  All that mixed up stuff on my addled computer needs a great deal more effort to communicate, so, according to Shannon, there is more information.  It is little wonder that Shannon ended up in a similar place to Boltzmann when the latter gave entropy its statistical definition in the late nineteenth century.  Both were dealing with probabilities or choices within populations.  For Boltzmann entropy was a function of the number of equivalent microstates within a system; for Shannon it was the effort required to specify a message, which, in turn, is a function of the number of alternative messages that could possibly be transmitted.
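Shannon’s measure can be made concrete in a few lines.  The sketch below (Python; the function name and the example strings are mine, not anything from Shannon) computes the average information per symbol of a message: an orderly message built from two equally likely symbols carries one bit per symbol, while a “mixed up” one spread evenly over sixteen symbols carries four.

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Average information per symbol, in bits: H = -sum p(x) * log2 p(x)."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# An orderly message drawn from two equally likely symbols: 1 bit per symbol.
print(shannon_entropy("aaaaaaaabbbbbbbb"))
# A message spread evenly over sixteen distinct symbols: 4 bits per symbol.
print(shannon_entropy("abcdefghijklmnop"))
```

The more evenly the symbols are spread — the more alternative messages there could have been — the more effort, measured in bits, it takes to specify the one actually sent.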

So here we hit a conundrum.

Information is crucial to understanding economies.  I think that is indisputable.  I regard it as the fundamental ingredient of all economic activity.  But information in an economy is not about communication exclusively.  We are not in the business of studying communication.  We are in the business of studying the order humans create and distribute in the form of the products and services they invent, need, or desire.  Information, in an economy, is an inherently creative phenomenon.  It is redolent with meaning.

Shannon himself warned us about confusing his usage of the word with the everyday version.  He wrote: “‘Information’ here, although related to the everyday meaning of the word, should not be confused with it”.  His version, the communication version, is closely associated with uncertainty, low probabilities, the difficulty of transmitting a message, and ultimately with entropy.  In economics we are concerned with what appears to be the exact opposite: we are searching for the certainty that a product will take a particular shape or have particular properties; we want to avoid surprise by making production and transaction predictable and easily replicable; we want to make production and transaction simple; and we want to instill order into our environment so we can extract the sustenance and pleasure we seek from whatever we create.

Which gets me back to Kolmogorov.

He, like Shannon, was grappling with the question of how much information was contained within a given object.  He suggested three angles of analysis: the probabilistic, the combinatorial, and the algorithmic.  Shannon had covered the first two.  It was the third that most caught Kolmogorov’s attention.  It ought to be the one of most interest to economists also.


Because Kolmogorov goes down the road of trying to describe the information contained, not simply in messages, but in substantial things.  Instead of trying to define information in the context of multiple alternative states or a population of entities — in the manner of Shannon and Boltzmann — he tried to identify information within a single entity.  His insight was this: the length of the algorithm needed to replicate the entity defines its information content.  The shorter the algorithm the simpler the object.

And this, I think, has enormous value to economics.

Kolmogorov introduced the word ‘complexity’ to describe what he was trying to measure.  The more information, the more complexity.  The two go hand in hand.  An object that can be described by a short algorithm has little complexity.

And here we arrive at a useful tool for analyzing economies and economic entities.  Products and services can be characterized by their algorithmic content.  Simpler products are easier to produce, require less information, and are easily understood.  And so on.
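Kolmogorov complexity itself is uncomputable, but compression length is a standard computable proxy for it.  A minimal sketch of that idea (Python; using zlib as the compressor is my choice, not anything in the original argument): an object generated by a very short algorithm — “repeat ‘ab’ five hundred times” — compresses to a few dozen bytes, while an object with no description shorter than itself barely compresses at all.

```python
import os
import zlib

def description_length(data: bytes) -> int:
    """Length of a compressed encoding of `data` -- a computable
    stand-in for its (uncomputable) Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

simple = b"ab" * 500        # generated by a very short algorithm
scrambled = os.urandom(1000)  # no description much shorter than itself

# The simple object admits a far shorter description than the scrambled one.
print(description_length(simple), description_length(scrambled))
```

On this view a product’s complexity is, roughly, the length of the shortest recipe that reproduces it: simpler products need shorter recipes.
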

With the explosion of the division of labor, and the associated explosion of information since the industrial era began, our economies, and the products and services within them, have risen in complexity.  This rise renders theories rooted in a simpler world irrelevant in that part of the new economy populated by complex entities and objects.

This, I believe, is why firms exist.  Rising complexity drives the need for organization and management.  Producing the algorithm is a feature of modern economies.  Economics and its obsession with transactions presumes that whatever is being traded is simple: the activity prior to exchange is trivial compared with the transacting itself.  This attitude instilled within economics an indifference and a lack of comprehension towards production that cripples its ability to engage a modern complex economy.  It sufficed in an agricultural or early industrial setting.  It is obsolete in a modern highly interconnected and complex setting.

This emphasis on algorithmic information content opens up a host of fruitful avenues to explore.  It allows us to anchor economics within the general sphere of information science.  It allows us to create measures of economic complexity.  And it allows us to understand the co-evolution of the complexity of the economy and the institutions, like firms, needed to make economic activity possible.

Information remains a conundrum in economics, but it isn’t because we haven’t thought about Shannon.

  1. February 16, 2021 at 5:33 am

    Economics was born back in a simpler agriculturally dominated era. Its roots still betray those origins. It has never been able to reconstruct itself as a body of thought concerning information and its application to raw materials and energy. It has focused, not on production, but on transactions and the optimal way of organizing them. To do this its has had to invent for itself a pseudo-psychology in order to get how the demand for goods and services originates, and then how they are purchased. It is only with the recent advent of behavioral economics that the older strictures about consumer attitudes has been challenged. And even then the new ideas have not been allowed to retire a re-write of the core of economics.

    The last sentence of the above quotation is not precisely correct. See, for example, Marc Lavoie’s book review of our book, Microfoundations of Evolutionary Economics, in particular the paragraph in the middle of page 267 (that starts with “Shiozawa (2016) provides”) and the paragraph in the middle of page 268 (that starts with “Third chapter describes”).

    As for the role of information discussed in the paragraphs that follow the above quotation, I must emphasize that it is not the information (in Shannon’s sense or in an ordinary sense of the word), but the material structure of complex interactions of outputs and inputs. This is also roughly explained in Lavoie’s review.

    As for the necessity of a “humility”, I also hope that Peter Radford would read my comment posted here.

  2. February 16, 2021 at 9:46 am

    1. Information shall not be discussed alone, but accompanied by the counterpart concept “Instruction”, innately in one’s brain, that defines, identifies and processes information; otherwise it would continue to be a “conundrum”.

    2. Further, “information” shall be complemented or replaced by the concept “knowledge”, which results from information processing. Most of the things in the laptop computer are knowledges rather than “information”, except that the knowledges are called “information” when they are transmitted to other people in the same way by which original information is transmitted.

    3. It shall be ridiculous in economics to discuss the question of “how much information is contained in an object”, because this is a topic of metaphysics, and it is impossible for us with bounded rationality to know it as a “bare truth”. To my understanding, Shannon’s theory is about how much information an object can be used to represent – by the digital method. This is entirely a computer science problem that has nothing to do with our socio-economic concern.

    4. Algorithmically, we shall assume that the information we can obtain from any object must be infinite, and the knowledges we can obtain from any information or object must be infinite, because, as Kant argued, it was impossible to know the thing-in-itself. We just know how much we have known, and we do not know how much we have not known (“Simple objects contain little information” is only a tautology that says nothing). In my opinion, economics shall be developed under these assumptions, and these assumptions can all be inferred with the introduction of thinking time and the “Instruction + information” structure. Thanks!

    • February 16, 2021 at 6:03 pm

      BinLi, I can rephrase my own conclusions about your 1 and 2 to show I agree with them, but at 3 you have gone wrong by missing the point. In his introduction to Shannon’s book, Weaver admitted that it didn’t seem to have much relevance to the real world, in order to show that, to a surprising extent, it did. Your 4 is equivalent to saying that there are an infinite number of things that numbers can be numbers of, but that does not follow from what Kant said, which was to the effect that we have to define our terms. Knowledge is not only what (as Locke put it) impresses itself on our senses, it includes our own definitions and our knowing them to be definitions.

      Given a Big Bang as defining an expanding universe within which energy exists and beyond its current limit does not, I cannot discuss this without two symbols to distinguish ‘existing’ and ‘not existing’, so that there are now four symbols, minimally requiring (in terms of Shannon’s logarithmic definition of information capacity) two ‘bits’ of information capacity to represent them. If these are numbers then (minimally) the numbers need to be Complex Numbers in which the two ‘bits’ are represented by two orthogonal dimensions. Defining an arabic number as an algorithm in honour of its inventor El Khorismi, I can model evolution in like manner (shifting to the next genus with the evolution of a fourth species) all the way up from a Big Bang to economics, and beyond that into fictitious capitalist money-making. Just as one needs to check there are no mistakes in any of the decimal arabic digits, so for each successive era one is able to examine the evidence for its four characteristic species.

      You argue that computers merely contain knowledge, but having worked with them since their infancy I can say you are mistaken. Shannon invented electric circuit logic in 1938, and since in the real world things go wrong, he invented error correcting logic in 1948. The logic for this is built into the hardware as ours is in our brains. Parallel to the logic processing the data is another line of logic checking everything for things going wrong, doing things like parity checks on numbers and objecting if one program overwrites the memory of another. In modern terms you not only have your ‘app’ but also the underlying operating system, ready to stop you when it detects an error. This in effect means there are four different types of knowledge: the raw data, how it is to be filed, how it is to be interpreted and acted upon (this not being the same for the data and its filing system), and the relationship between the input and output, i.e. what the program was intended to do and what it actually did. If the output was wrong that may involve feeding back error information and reprogramming; otherwise the results provide the starting point for the next round of processing. As I’ve told you before, the four levels are spelled out in the User’s Guide for Algol68, but the errors were detected as operating system ‘events’. Algol68 was called that because its minimally complex algorithmic style of programming reuses procedures like arabic numbers reuse numerals and shifts.

      You say this has nothing to do with our social concerns, but you argue literally (about what you can see), not analogically (about likenesses that can’t be seen unless you look for them). I’ll say simply that human brains have the same architecture as computers, so Shannon’s dealing with mistakes is a very strong reminder that we too make mistakes and need to deal with them. The same can be said when the analogy offered is cybernetics (“steering” or navigation), where chaos can be induced by repeatedly changing course to avoid obstacles but not changing back, so the system loses track of its position. This is highly relevant to the sale of share derivatives. (There was a delightful story about road works in Bristol which ended up with a driver in a loop: his satnav directing him round and round the same set of diversions!)

      As Weaver said, Shannon is much more relevant than at first glance he appears to be. If messages are left garbled you may be unable to see their meaning and they will then not have desired effect. The problem with derivatives seems to have been caused by high flyers misinterpreting Shannon’s comment about (what are now) internet messages looking like random noise, not seeing (or perhaps not wanting to see) that the one is meaningful (given one knows how to decode it) and the other not.

      • February 18, 2021 at 7:48 am

        Dear Dave, I am very glad to see your response, which I almost missed. The system notifies me of everything except readers’ responses to my posts. That is amazing. Thank you very much for your long comments on my post. As to point 3, I’d like to clarify that my criticism is only of such sentences as “how much information is contained in an object”, although, as you say and as I agree in principle, there are many similarities between a computer and the human brain. However, social science is different from computer science, and in my opinion social scientists shall not argue in that way. Actors study the information from objects, and our social scientists study the actors, so we need not care “directly” about that issue, although it is generally true that simple objects contain little information, and the marginal information that is exploited from a certain object must keep diminishing. That is, we shall be “knowledge-neutral”. Since Kant insisted that the thing-in-itself is unreachable, information from any object must be primarily deemed infinite. It has to be so, no other choice. Additionally, Kant can be deemed to have hinted at the existence of Instructions that transform information into knowledges, ideas, and so on. All of these assumptions shall be made without exception, according to Kantian philosophy. Therefore, what a computer contains are not only information, but also instructions and “knowledges”. Both the operating system and the Apps are “knowledges”. Any program can be a “knowledge”. Hence, knowledge, instead of information, shall be at the core of social discussions. Thanks a lot!

        If you suggest any literature for my “Algorithmic” study, please email it to me: libinw2011@163.com I also want to get your email address. Thanks!

  3. February 16, 2021 at 1:51 pm

    As I am the critic whose challenges Radford has taken objection to (apparently not seeing my taking objection to what amounts to HIS “speaking ill of the dead”), I have some reason to answer him by comparing what I actually said with what he said. There is, however, some history behind my apparent animus, though (as I’ve said) that was intended to challenge rather than to give offence.

    It is fairly evident from the way words pour out of him that Radford thinks in words, whereas I am an intuitive and have to painstakingly translate what I see into words: as a scientist attempting to make mine honest, concise and logically consistent. That seems to make what I say difficult for those who think in words. So after I had spent weeks trying to organise my thoughts as a contribution to a WEA conference, on discovering Radford was to review my paper I asked his advice – early on – about how I was presenting it. In fact he said nothing, but simply didn’t recommend my paper. So much for giving all points of view (especially neglected ones) a hearing!

    Let me then compare Radford’s interpretation with what I actually wrote.

    Radford: “First, I was accused of never had read “Utopia”. This is a highly personal attack . It is offensive. But par for the course. Quite how anyone could have this knowledge I don’t know”.

    I wrote: “Radford has got a nerve, urging thought while regurgitating thoughtless Marxist abuse against a “Utopia” he has probably never read, or in any case never understood in its pre-Reformation context”.

    So I didn’t claim knowledge, and I INFERRED from what he was saying that he hadn’t read “Utopia”.

    Radford: “In this second personal attack I was accused, yet again, of not having read a certain text. In this case Shannon’s theory of mathematical communication. … it is not just me that hasn’t read or understood Shannon, it is the entire profession of economics”.

    I wrote: “I don’t suppose he’s read Shannon either, never mind where his feedback and theory of information have led me, despite my repeatedly pointing out their significance here”.

    Again this is not a “black and white” assertion, but “supposes” on the basis of the evidence he, other economists, interested writers and positivist scientists have presented. My mentioning feedback and information theory offered him clues if he was prepared to rise to their challenge. In rejecting them here, he has shown again that he hasn’t understood information science, conflating unmeasurable “meaning” with the information “capacity” and “content” which Shannon made measurable. As I said, he is not the only one who does this; esteemed writers Rifkin and Gleick do so too in their respective books on information. The gist of Shannon is that we have enough spare capacity to give ourselves time to correct our mistakes. The relevance of this to economics becomes evident when inadequate incomes demand correction: “you cannot get a quart out of a pint pot”. As Ruskin had already pointed out in the 1860s, the rich have so much spare capacity that they can easily afford to provide adequate wages.

    Radford quotes Durer: “But I shall let the little I have learnt go forth into the day in order that someone better than I may guess the truth, and in his work may prove and rebuke my error. At this I shall rejoice that I was yet the means whereby this truth has come to light.”

    I wrote: “Again, Radford affirmatively asks “What new theory is there? Really new”; then glosses over Durer answering “Very little, but see how one thing leads to another”.

    Here I’m responding to Radford claiming nothing is going to change, though the reality is significantly more complex than I’ve had Durer suggest. In the beginning there was no “thing” to lead to another: the “Big Bang” was a process rather than a “thing”. Almost all the “new” things are circulation concepts like using rotating clocks instead of terrestrial shadows. I said that what is new is that the Universe has changed, and we with it. We now see Earth going round the sun instead of the Sun going round the earth. Bacon’s new science first revealed that blood circulates, then electricity, then the complex (two-dimensional) mathematics of electromagnetism, then how to communicate information by varying [modulating] alternating [mathematically cyclic] power flows: hence telegraphy, telephones, television, remote control. These are to Information as worms, frogs, mammals and humans are to Life.

    When the economists at Santa Fe talk about complexity, they are talking about meanings being complicated rather than information capacity needing to be complex (as in complex number and map-making co-ordinates): to provide not just for “doing” but for Shannon’s (and before that Christian) “putting right”. Following Occam rather than Francis Bacon, they have interpreted the dimensions in the logistics equation as multiples of their one dimension. This is not as Pythagoras, trigonometrists and subsequent architects originally saw dimensions: as completely different directions represented by right angles. To see a star you need to look in the right direction. Derivative portfolios caused chaos by diverting attention to the night sky.

    Of course I get irritated with Radford when he denies the possibility of what has already become evident: in my scientific field if not in the rhetorical field that he has specialised in. I’ve started “at the beginning”, providing the clue that that is how we count in arabic numerals, starting from 0 and (adding to what I wrote about Durer) expanding into new dimensions as well as moving on from 1 to 2. If Peter hasn’t got the imagination to understand such analogies as clues, he should stop standing in the way of we who are more imaginative.
