
How to ensure that models serve society

from Lars Syll

• Mind the assumptions — assess uncertainty and sensitivity.

• Mind the hubris — complexity can be the enemy of relevance.

• Mind the framing — match purpose and context.

• Mind the consequences — quantification may backfire.

• Mind the unknowns — acknowledge ignorance.

Andrea Saltelli, John Kay, Deborah Mayo, Philip B. Stark, et al.

Five principles that today’s “the model is the message” economists would benefit much from pondering. And especially when it comes to the last principle, they would benefit enormously from reading Keynes.

More than a hundred years after John Maynard Keynes wrote his seminal A Treatise on Probability (1921), it is still very difficult to find economics and statistics textbooks that seriously try to incorporate his far-reaching and incisive analysis of induction and evidential weight. 

The standard view in statistics and economics — and the axiomatic probability theory underlying it — is to a large extent based on the rather simplistic idea that “more is better.” But as Keynes argues — “more of the same” is not what is essential when making inductive inferences. It’s rather a question of “more but different.”

Variation, not replication, is at the core of induction. Finding that p(x|y) = p(x|y & w) doesn’t make w “irrelevant.” Knowing that the probability is unchanged when w is present gives p(x|y & w) another evidential weight (“weight of argument”).
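Keynes’s point can be made concrete with a small numerical sketch of my own (a Beta-Bernoulli setup; nothing here is from the post): two bodies of evidence can yield exactly the same probability while differing enormously in evidential weight.

```python
# Keynes's "weight of argument": the probability of an event can be the
# same under two bodies of evidence while the amount of evidence behind
# it differs enormously. Illustrative Beta-Bernoulli sketch: after h
# successes in n trials, take the posterior for the chance p to be
# Beta(1 + h, 1 + n - h).

def beta_mean_var(a, b):
    """Mean and variance of a Beta(a, b) distribution."""
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var

# Thin evidence: 2 successes in 4 trials.
m_thin, v_thin = beta_mean_var(1 + 2, 1 + 2)
# Ample evidence: 500 successes in 1000 trials.
m_ample, v_ample = beta_mean_var(1 + 500, 1 + 500)

print(m_thin, m_ample)   # 0.5 0.5 -- identical probabilities
print(v_thin > v_ample)  # True -- far more "weight" behind the second
```

On this reading, “more of the same” leaves the single number p(x|y) untouched; what grows is the weight behind it, which the point probability alone cannot express.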

According to Keynes we live in a world permeated by unmeasurable uncertainty — not quantifiable stochastic risk — which often forces us to make decisions based on anything but “rational expectations.” Keynes rather thinks that we base our expectations on the confidence or “weight” we put on different events and alternatives. To Keynes, expectations are a question of weighing probabilities by “degrees of belief,” beliefs that often have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents as modelled by “modern” social sciences. And often we “simply do not know.”

Science according to Keynes should help us penetrate “the true process of causation lying behind current events” and disclose “the causal forces behind the apparent facts.” Models can never be more than a starting point in that endeavour. He further argued that it was inadmissible to project history on the future. Consequently, we cannot presuppose that what has worked before, will continue to do so in the future. Getting hold of correlations between different “variables” is not enough. If our models cannot get at the causal structure that generated the data, they are not really “identified.”

How strange that writers of economics and statistics textbooks as a rule do not even touch upon these aspects of scientific methodology that seem so fundamental and important for anyone trying to understand how we learn and orient ourselves in an uncertain world. An educated guess as to why would be that Keynes’s concepts cannot be squeezed into a single calculable numerical “probability.” In the quest for quantities, one turns a blind eye to qualities and looks the other way – but Keynes’s ideas keep creeping out from under the carpet.

It’s high time that economics and statistics textbooks give Keynes his due.

  1. January 3, 2023 at 12:08 pm

    Dear Lars Syll,

    The People’s Global Resource Bank cryptocurrency Eco values natural resources, settles public debt, invests in ecosystem restoration and gives everyone a basic income for life. Please review https://grb.net

    GRB Eco Algorithm Designer John Pozzi

  2. Gerald Holtham
    January 3, 2023 at 12:42 pm

    Keynes sounds rather Bayesian in Lars’ account. “Degrees of belief” is like the prior probability that a Bayesian brings to her analysis and progressively modifies with exposure to the data to get an a posteriori probability. Bayesians, like Lars and Keynes, do not suppose that the probabilities exist in nature. They are our constructs in trying to make sense of nature. Think of a bookmaker quoting odds on a horse race. He has no philosophical justification for quoting quantitative odds on each horse. He does not think the race will be a random draw from a population of identical horse races. He gives numerical expression to his own view or guess of the different likelihoods. He has to do that to do business. Then he modifies his views rapidly in response to the betting market. The outcome of the race remains highly uncertain. Even more uncertain possibilities are assigned likelihoods by bookmakers and insurance companies.
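Holtham’s bookmaker can be sketched in a few lines of Python (the horses, prior odds, and likelihoods below are invented for illustration; nothing here comes from the thread): a prior degree of belief is revised by Bayes’ rule as market information arrives.

```python
# A bookmaker read as a Bayesian: quoted odds encode subjective degrees
# of belief, revised as betting-market information arrives.

def bayes_update(prior, likelihood):
    """Posterior proportional to prior times likelihood, renormalised."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

prior = {"Ajax": 0.5, "Bolt": 0.3, "Comet": 0.2}  # opening book
# Heavy money arrives on Comet; treat it as evidence twice as likely
# if Comet is the strongest horse than otherwise.
likelihood = {"Ajax": 1.0, "Bolt": 1.0, "Comet": 2.0}

posterior = bayes_update(prior, likelihood)
print(round(posterior["Comet"], 3))  # 0.333: belief in Comet revised upward
```

The posterior is still a subjective construct, exactly as Holtham says: the numbers express the bookmaker’s revised confidence, not a probability residing in the horses.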
    There are many things we cannot know and uncertainty is pervasive. The question in economics is, or should be, how do people deal with it. Economists are not soothsayers, claiming to foretell the future but they are supposed to have some idea as to how people will react to events.
    Economics that assumes uncertainty away (rational expectations etc) is generally irrelevant, misleading or worse, as Lars rightly says. But an economics that supposes people act on subjective probabilities that they assign to events is, in fact, quite realistic. That’s what people do in business. The less people are able to assign subjective probabilities the less inclined they are to act at all.
    As for Lars’s assertion that there is a standard view that “more is better”, that is a canard. So far as many of us are concerned, Karl Popper put “more is better” in the bin a long time ago. However often a general statement is corroborated, it remains provisional. Induction is not a logical process at all, which Keynes never quite realised. It is purely psychological. Our degree of subjective confidence may rise as outcomes are repeated but there is no objective probability that increases. What Keynes saw through a glass darkly, Popper expressed clearly. The German-language original of the “Logic of Scientific Discovery” was published in 1934.
    Lars persistently misunderstands the methodological basis for the use of statistics in social research, which is why he makes such statements.

    • January 3, 2023 at 4:04 pm

      Gerry, I think much of your argumentation here (and elsewhere on this issue) is founded on an equivocation between what we in science define as probability and what people ordinarily mean by things being more or less ‘probable.’ One typical example of this is your statement: “But an economics that supposes people act on subjective probabilities that they assign to events is, in fact, quite realistic. That’s what people do in business.”

      Randomness obviously is a fact of the real world. Probability, on the other hand, attaches (if at all) to the world via intellectually constructed models, and a fortiori is only a fact of a probability-generating (nomological) machine or a well-constructed experimental arrangement or “chance set-up”.

      Just as there is no such thing as a “free lunch,” there is no such thing as a “free probability.” To be able to talk about probabilities at all, you have to specify a model. In statistics, any process in which you observe or measure is referred to as an experiment (rolling a die), and the results obtained are the outcomes or events of that experiment (the number of points rolled, e.g. 3 or 5). If there is no chance set-up or model that generates the probabilistic outcomes or events, there is, strictly speaking, no event at all.

      Probability is a relational element. It always must come with a specification of the model from which it is calculated. And then to be of any practical scientific value it has to be shown to coincide with (or at least converge to) real data-generating processes or structures – something seldom or never done!

      And this is the basic problem with economic data. If you have a fair roulette wheel, you can arguably specify probabilities and probability density distributions. But how do you conceive of the analogous nomological machines for prices, gross domestic product, income distribution etc? Only by a leap of faith. And that does not suffice. You have to come up with some excellent arguments if you want to persuade people into believing in the existence of socioeconomic structures that generate data with characteristics conceivable as stochastic events portrayed by probabilistic density distributions!

  3. January 3, 2023 at 1:21 pm

    “But an economics that supposes people act on subjective probabilities that they assign to events is, in fact, quite realistic. That’s what people do in business.”

    Interesting. I did not think that bookmakers were particularly concerned with their own views on which horse may win, but rather with the potential punters’ views, and how they might set odds that would attract bets that together would make a sure-thing book. Similarly I am not sure that people selling ‘sustainable solutions’, for example, are always as concerned about actual sustainability as their potential customers’ perceptions.

    In many markets prices do tend to equilibrate around the current subjective view, which can lead to ‘bubbles’. I get that this is ‘realistic’, but not necessarily in a good way.

    “The less people are able to assign subjective probabilities the less inclined they are to act at all.” Contrary to ‘rational expectations’. (Although I would say that people become less inclined to be exposed to risks, which may mean disinvesting in a way that some would regard as ‘irrational’.)

    So this all points to this being an important topic.

  4. László Kulin
    January 4, 2023 at 9:30 am

    I believe that there is a need for new strategies, alternatives and models. Something must be done. Which ways are good and which are wrong will be decided by professional economists. It is good for new initiatives to appear in the media and at international conferences, because in this way citizens will learn that economists and other experts are doing hard work in this crisis. This is a difficult path that involves a lot of conflict. We must accept the disciples of Keynes.
    Wishing you a Happy New Year from Hungary:
    László Kulin
    social expert

  5. ghholtham
    January 4, 2023 at 6:33 pm

    I do not think your restricted definition of probability can ever be applied to the test of a scientific theory, in any field, not just economics. We never know enough about the “nomological machine” of reality to know what the “objective” probabilities are or whether they even exist. Our probabilities are always an expression of confidence, based on conventions. That does not mean they are illegitimate. The probabilities that people used to decide that the Higgs boson existed were probabilities in my sense, I think, not yours.
    Moreover I agree that you cannot extract causality from statistical analysis of data. My objection is only to your assertion that that is what econometricians think they are doing. I plead not guilty.
    You can however test a causal theory or proposition on data. We may not agree on the nature of the “probabilities” that can then be assigned. When a theory is advanced it suffices to examine one data set to test its generality. The dataset used does not have to be a sample of anything as long as it falls within the supposed domain of the theoretical proposition. Only if you wish to claim that passing the test somehow validates the theory for the whole of its domain of application do you need to assert or assume that the dataset is an unbiased sample of the whole domain. Personally, I cannot think of situations in social studies where such a claim would be warranted. If the proposition passes the test you can go on believing it in the absence of evidence to the contrary but the test provides no additional warrant that it holds in other times and places.
    Ambiguity arises because all propositions in social science will be stochastic (not the place to explain why but I will if provoked). This means that one can seldom ask does such and such a parameter have exactly the value or the characteristics dictated by theory. Rather we ask is it near enough for us to say that it is consistent with the theory or, alternatively, decisively at variance with the theory.
    This is where our differences occur.
    If confounding variables have been successfully neutralized, the residuals of the equation we fit to the data set will have the characteristics of white noise. If they do not, we can make only qualitative statements about whether the theory looks like a fit or not.
    The way we do that is to treat our dataset as a self-sufficient population and look at how parameter values vary across it from one observation or subset of observations to another. If this variation is large in relation to estimated values of a given parameter we may be unable to say much other than that the theory, right or wrong, is unlikely to have much predictive power in practice. If on the contrary the variations are small in relation to the parameter we can say whether it appears to fit the theoretically specified value.
    To go further, we need those white-noise errors. Because then we can infer via the central limit theorem that the residuals are indeed the result of a host of small random elements and we may be able to infer that the variations in parameter estimates across the sample are normally distributed. This enables us to provide a purely conventional quantification of our subjective confidence that the test has been passed or failed. Assigning the resulting “probabilities” to expected future values of the parameters in question requires additional assumptions: that the theory is indeed correct and the system under examination is broadly stable. No-one in their right mind claims these assumptions have been proved though they may or may not be empirically reasonable.
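    The residual check Holtham describes can be illustrated with a toy sketch (simulated data and my own construction, not any real economic series): fit a line by ordinary least squares and ask whether the residuals’ lag-1 autocorrelation is near zero, as white noise requires.

```python
# Toy version of the white-noise residual check: simulate data from a
# known linear relationship, fit it by least squares, and inspect the
# residuals' lag-1 autocorrelation.
import random

random.seed(0)
n = 500
x = [random.gauss(0, 1) for _ in range(n)]
# Simulated "true" relationship with white-noise disturbances.
y = [2.0 + 3.0 * xi + random.gauss(0, 1) for xi in x]

# Closed-form simple OLS for y = a + b*x.
mx, my = sum(x) / n, sum(y) / n
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
a = my - b * mx
resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]

def lag1_autocorr(e):
    """Lag-1 sample autocorrelation; near zero for white noise."""
    m = sum(e) / len(e)
    num = sum((e[t] - m) * (e[t - 1] - m) for t in range(1, len(e)))
    den = sum((ei - m) ** 2 for ei in e)
    return num / den

print(round(lag1_autocorr(resid), 3))  # close to 0 here: residuals look like white noise
```

    If the autocorrelation were large, the white-noise premise fails and, on Holtham’s account, only qualitative statements about fit remain available.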
    Bayesians are even clearer that in most applications probabilities are no more than expressions of degrees of subjective confidence, which derive any objectivity from the fact they have been generated by conventional and repeatable processes.
    I am sorry to be so long in my reply. I am trying to be clear and isolate the point(s) of difference. I think you are supposing that people are claiming to do things that they are not claiming – and certainly should not claim – to be doing. I also think you have an exalted idea of the degree of assurance that any theory could have. Despite this we agree about the state of much contemporary economics.

    • January 6, 2023 at 3:54 pm

      A recurrent theme in your “defence” of the standard testing procedure in economics/econometrics is that it is, more or less, unproblematic: “The way we do that is to treat our dataset as a self-sufficient population and look at how parameter values vary across it from one observation or subset of observations to another.” The economic/econometric assumptions are presented as if they were always and everywhere possible to test. In my opinion, that is a vastly over-optimistic view.
      Data NEVER speak for themselves. Why? Mostly because data (as are probabilities) are theory- and model-dependent! When we use them in econometrics they are nothing but matrices of numbers, and relating those numbers to each other is only interesting if we can convincingly show that the relations represent some real-world economic phenomena/processes that we want to understand and explain. Since the kind of empirical “testing” that goes on in econometrics is basically testing given the same model assumptions that were used to construct the models, it is NOT what we are looking for. At least not if we, as I would argue we should, set the true goal of testing to be the probing of congruence/compatibility of our theoretical models and the real target systems. The empirical testing pursued by econometricians, trying to get hold of “the data-generating process,” never really steps out of the model since the data they use to test their models are themselves essentially part of the models. The tests performed are only actual tests if we believe that the given real-world target is (more or less) nothing but a model analogue. I guess few of us think so. The way I see it is that as long as you cannot convincingly show that there is also a tight link between the model “tested” and the real-world target, the value of the “testing” is exceedingly small.

      • rsm
        January 6, 2023 at 7:30 pm

        Why shouldn’t the same reasoning apply to animal experimentation in science? Where is the discussion of ethics?

      • Gerald Holtham
        January 12, 2023 at 12:21 pm

        I am unable to answer your comment because I cannot relate it to anything I have written and I do not know what you are driving at.
        Thanks. I hope it led to some convergence in our positions though I am not sure it has.

  6. Gerald Holtham
    January 9, 2023 at 4:15 pm

    Let us not confuse different levels of analysis. To collect data on “income”, “consumption”, “exports” or “hours worked” you have to have a prior concept of each of those things. A broad conceptual framework is necessary for any data to be collected. It is then possible to use the data to discriminate among competing theories which seek to explain empirical variations in the concepts or categories. In doing so you are accepting the categories as things to be explained. If you are saying it is very hard to test a whole conceptual scheme from within the conceptual scheme, I can only agree. However, data currently collected, like standardised national accounts, embody very broad conceptions about the nature of economic activity; they do not tie us to any of the popular or less popular schools of thought about economics. And as preoccupations change, the data collected changes too – carbon emissions becoming part of economic data sets is a case in point.
    Econometrics can test the comparative advantage theory of trade or the quantity theory of money. If you want to argue that all the categories used in most discussion of economics, be it neo-classical or Marxist, are misconceived, then econometrics can’t help. You have to sketch an alternative conceptual scheme and criteria for deciding whether it is better or not before we can get further.
    Is that what you meant by saying testing is using the same model assumptions to construct the model? If not what do you mean?
    Your final sentences include an inversion of the true situation and a puzzle.
    The real-world system that we are addressing is not an analogue of the model. The model purports to be a simplified or distilled analogue of the real world system. Why should we “believe” it is? In my world three reasons:
    1. It is a plausible hypothesis about causal factors which is compatible with what we think we know about the world and human nature.
    2. It is consistent with the data generated by the real-world target system.
    3. We don’t have a better theory/explanation in terms of the first two criteria.
    You and I agree that 1 is neglected in economics under Friedman’s baleful influence. Why we should disagree on 2 still escapes me but apparently we do.
    The real puzzle for me though is I have to show a “tight link” between target system and model and “probe the congruence/compatibility..”
    What can that mean other than my criteria 1-3? What does empirical testing look like in your world? Can you give an example of someone doing it right?

  7. Gerald Holtham
    January 9, 2023 at 4:20 pm

    RSM Any testing requires levels of integrity in recording and reporting processes and results. Those apply generally. Animal testing raises other issues entirely. Even when you torture data, they don’t scream.

    • rsm
      January 12, 2023 at 4:13 am

      So suicides rising with GDP can be safely ignored because you can abstract away the screaming?

  8. January 9, 2023 at 7:12 pm

    Just a short comment on number 2 of your 3 points on why we should believe a model: “It is consistent with the data generated by the real-world target system.” That view is unsustainable for several reasons. One of them (something convincingly argued by Duhem and Quine) is that the empirical (econometric or not) tests of our models or theories are ALWAYS underdetermined. Consistency is only rarely enough to decide on which theory/model is the “best” since many competing theories/models can all be deemed consistent with the real-world target system. Testing the validity and soundness of economic hypotheses ALWAYS transcends questions of consistency. If we don’t acknowledge this, I would argue that (even given your points 1 and 3) testing is almost completely useless as an inference tool for deciding if a model, econometric or not, has been falsified or supported by the “tests” performed. As Renzo Orsi had it a couple of years ago: “if one judges the success of the discipline on the basis of its capability of eliminating invalid theories, econometrics has not been very successful” …

  9. Gerald Holtham
    January 10, 2023 at 11:44 am

    Consistency with data is not sufficient for establishing a theory.
    As you say many theories will be consistent with a finite data set. But it is surely necessary. No theory is any good that is not consistent with the data. My 2 is a necessary condition but needs 1 and 3 as well.
    Economics has not been very successful at eliminating valid theories, it is true. There are three reasons for that. One, in a complex system it is always possible to add conditions or plead extraneous influences to preserve a theory. Lakatos pointed out that this was a problem with falsification in more solid disciplines than economics; two, given the complexity of social systems you need a lot of data to truly resolve issues, and data in economics tend to be sparser and noisier than the ideal; three, economics is ideologically as well as scientifically contested, and neo-classical economics has survived by resolutely ignoring empirical findings, including econometric results.
    But the fact remains we cannot do better than the best we can do. If we do not try and test propositions against empirical data, even if the tests are not always powerful or conclusive, how can we make progress?
    Lakatos has not stopped scientists trying to test hypotheses. Do you really want to stop people applying statistical tests to economic theories? And I repeat my question: what do you want them to do? How do we find your “tight links”?

  10. Gerald Holtham
    January 10, 2023 at 11:44 am

    sorry – for “valid” read “invalid”

  11. Gerald Holtham
    January 10, 2023 at 3:41 pm

    Can I make a point at a slight tangent to the discussion above? I am conscious of taking up a lot of space on this blog. My excuse is the issues are of some importance.
    Econometrics is not just used for testing theories. I am at one with Lars in being sure it cannot extract causation, merely (weakly) test theories of it. But it is also used in the mundane business of forecasting, without pretending to have a causal theory or have a true understanding of an underlying system. I do not believe in induction as a logical principle but like everyone else I can observe well-established empirical regularities and in the absence of any reason to think they will change I can use them to project the future to a limited horizon. This is not science; one is looking not for causes but for leading indicators. One’s confidence in projections is limited but the results are likely to be better than guesses not informed by a careful inspection of persistent relationships in the data. I don’t know why reality permits this sort of exercise but I know it often works because I have made money from doing it! It does not produce theory but anyone proposing a theory has to make that consistent with those observed regularities.
    However it is important to distinguish this use of econometrics from the testing of theories. The two are quite distinct. I daresay there have been practitioners who mixed up the two and made false claims. My plea to Lars then is to blame the carpenter not the saw.
    Lucas pointed out that forecasting models cannot be used with confidence to do policy simulations because they are not structural, i.e. truly causal. He was right about that. Assuming that unexplained regularities will survive a change of policy regime is evidently perilous if not unwarranted. It is, however, somewhat less crazy than making up counter-factual choice-theoretic fairy stories about a non-existent representative agent, using the “results” to impose restrictions on a macroeconomic model and announcing you have produced a structural model and solved the problem! What makes that crazier is that the model restrictions, when themselves tested, are always rejected by the data.
    Newton despaired of understanding the madness of crowds. The madness of crowds of economists is equally challenging.

    • Meta Capitalism
      January 10, 2023 at 5:07 pm

      Gerry, first, I am glad to see you engaging in dialogue with Lars. I really enjoy learning from both your comments, and in exchanges like the one above nuances are important. Thanks to both Lars and you, this kind of dialogue makes RWER worth reading. All the best.

  12. Gerald Holtham
    January 12, 2023 at 12:23 pm

    I cannot answer your comment because I don’t know what you are driving at. I cannot relate it to anything I have written.
    Thanks. I would hope it has led to some convergence in our positions though I am not at all sure it has.

    • Meta Capitalism
      January 13, 2023 at 7:24 pm

      … I think you need to manage your expectations better Gerald. Lars is solely focused on methodology and the philosophy that underlies certain forms of abuse of mathematics in the so-called “science” of economics. You seem to take his arguments and the evidence he gathers for them personally.

      As for myself, I don’t see your positions as that different when looked at in the totality of your own comments on RWER. I do find that you have been inconsistent: at times admitting the limitations, but at other times making the false assumption that econometrics and statistical analysis are the only valid ways to knowledge while labeling all others as mere storytelling (and you have wavered on this point itself at times, contradicting previous statements). Lars is at least consistent.

      Your focus seems to be the pragmatic and reasonable use of econometrics to the extent that these statistical techniques provide limited but useful information. There are limits to those techniques if the data is poor or non-existent, or worse, simply unobtainable (e.g., radical uncertainty). But there are other ways of obtaining useful and reliable knowledge than econometrics or statistics in the social sciences (case studies, ethnographic studies, etc.), and these are not merely anecdotal or storytelling but do provide good evidence of causality. And more importantly, sometimes these are the only way to get at true causes, which can be obscured and obfuscated by the false belief that only mathematical quantification can lead to knowledge, or by the presupposition that the methods of the natural sciences are the yardstick for the social sciences:

      Your presupposition is that the application of the methods of natural science is the yardstick for social science. This is scientism. (Robert Delorme, A Cognitive Behavioral Modelling for Coping with Intractable Complex Phenomena in Economics and Social Science. In Economic Philosophy: Complexity in Economics, WEA Conference.)

      You have articulated at times a sound argument for the pragmatism that recognizes the limitations of econometrics and statistical analysis, and I have found these comments most useful. I call them “the best of Holtham.” I continue to reflect back on them and learn new things to this day.

  13. rsm
    January 13, 2023 at 6:11 am

    Gerald Holthman said:

    《RSM Any testing requires levels of integrity in recording and reporting processes and results. Those apply generally. Animal testing raises other issues entirely. Even when you torture data, they don’t scream.》

    《I cannot answer your comment because I don’t know what you are driving at. I cannot relate it to anything I have written.》

    If you use data to justify, say, animal experiments because they increase GDP and human welfare (which is regularly argued against my animal rights stance), why do you get to ignore the screams of the actual animals tested, and the rising human suicide and overdose rates, that accompany both of these? Are some of us more equal than others?

    • rsm
      January 13, 2023 at 6:12 am

      Holtham not -man

  14. Gerald Holtham
    January 14, 2023 at 4:01 pm

    To RSM: I have not justified animal testing or indeed taken any position at all with respect to it. We were talking about whether you can use statistics to test theories in economics. There were no propositions from myself or Lars Syll about whether GDP is a good measure of welfare or how animal and human rights are to be compared.
    Meta: I take nothing personally but I enjoy a good discussion and contesting error when I think I see it.
    Of course statistical methods are dependent on data quality and have their limitations, as I have been at pains to point out. Moreover I deny absolutely that I claim they are the only means “of obtaining useful and reliable knowledge”. In fact I have defended the use of concrete case or pilot studies against Lars’ claims that they too were not useful. He was the guy criticizing Esther Duflo’s Nobel prize for “experimental economics”, not me.
    I am ecumenical when it comes to empirical methods – whatever works is ok by me. The central issue here, it seems to me, is that Lars criticizes economics for not being an empirical study – and I entirely agree with him. Only then, he goes on to criticize the methods used to try and give the study some empirical content. Not bad examples or poor practice, mark you, but the methods themselves even carefully applied. No to statistics and to pilot studies. So how do we make economics empirical? Lars is silent. You cannot tease an answer out of him.

    • January 14, 2023 at 4:28 pm

      I think you’re a little bit too sweeping in your remark about my evaluation of the kind of “empirical” work — especially on using RCTs — that Duflo et consortes perform. What I have said is that one cannot take for granted that RCTs give generalizable results, since the fact that things (appear to) work somewhere is no warrant for believing that they will work for us here or that they work generally. I have also questioned their underlying “interventionist” approach since it often means that instead of posing interesting questions on a social level, the focus is on individuals. Instead of asking about structural socio-economic factors behind, e.g., gender or racial discrimination, the focus is on the choices individuals make. Duflo et consortes want to give up on “big ideas” like political economy and institutional reform and instead go for solving more manageable problems “the way plumbers do.” But while a plumber is good at fixing minor leaks in your system, that is of no avail if the whole system is rotten. Then something more than good old-fashioned plumbing is needed. The big social and economic problems we face today are not going to be solved by plumbers performing interventions or manipulations in the form of RCTs.

    • Meta Capitalism
      January 16, 2023 at 12:07 am

      Understanding any historical episode requires attention to all the particularities of the situation – as historians do. The question of which of the factors at play represent more general tendencies and could recur is not trivial…. [Y]ou cannot analyse that data except statistically. Everything else is anecdote and literature. (Gerald, RWER, 2/8/2020, emphasis added)

      On the one hand I don’t see a lot of nuances in the comment above; some lip service to historical details and then a rather offhand dismissal verging on absolute unqualified “You cannot … except statistically.” But, on the other hand you have noted statistical studies need to be “confirmed by observation at micro level,” which sounds very similar to Spiegler’s “auxiliary work” in what he proposes in his reform proposal (chapter 7) of a new field within economics which already exists informally, but he would formalize and call “interpretative economics (Spiegler 2015, 165).” And a fuller throated example is here.

      Consistency counts in my view, but this is a blog, and no doubt if you look long enough you will find some comment of my own that contains an inconsistency, so I will take you at your best, my friend, and assume then you don’t think case studies, participant observation, and evidence from other diverse sources is mere “anecdote and literature” and that evidence (data) comes in diverse forms, some more amenable to statistical analysis and some less so or not at all. And if evidence in other forms is valid then we can make use of it in our theory building and knowledge generation (I have seen you do it right here on RWER!).

  15. Meta Capitalism
    January 16, 2023 at 10:43 pm

    Sorry, meant to say, “and evidence from other diverse sources is _not_ mere “anecdote and literature” …”

  16. gerald holtham
    January 20, 2023 at 11:09 pm

    Lars, well you are right; your critique of pilot studies was more nuanced than my remarks implied. But I would make two points. Of course a case study cannot guarantee that the same conclusions will apply in other times and places, any more than a statistical test on one data set guarantees the same conclusion on a different one. The work nonetheless establishes something and may be informative, if not conclusive, about other situations. You write as if there is some better way to proceed but there isn’t. Secondly, if a study is tackling one question it is unreasonable to judge it for failing to answer an entirely different question. It is not at all clear that Duflo wants to give up on big ideas of political economy just because she tackles the question of how to relieve poverty under the existing system. You can’t answer all questions at once.
    There are some questions so broad and so entangled with values that it is difficult to address them empirically. Ideology inevitably intrudes.
    I don’t deny that. But when we can address more specific issues empirically we should do so. Even then, victories over error are usually achieved on points not by a knock-out. It’s unfortunate but c’est la vie. If you ask for too much certainty you lapse into nihilism.
