Weekend read – Has economics — really — become an empirical science?

from Lars Syll

In Economics Rules (Oxford University Press, 2015), Dani Rodrik maintains that ‘imaginative empirical methods’ — such as game theoretical applications, natural experiments, field experiments, lab experiments, RCTs — can help us to answer questions concerning the external validity of economic models. In Rodrik’s view, they are more or less tests of ‘an underlying economic model’ and enable economists to make the right selection from the ever-expanding ‘collection of potentially applicable models.’ Writes Rodrik:

Another way we can observe the transformation of the discipline is by looking at the new areas of research that have flourished in recent decades. Three of these are particularly noteworthy: behavioral economics, randomized controlled trials (RCTs), and institutions … They suggest that the view of economics as an insular, inbred discipline closed to the outside influences is more caricature than reality.

I beg to differ. When looked at carefully, there are in fact not that many good reasons to share Rodrik’s optimism about this ‘empirical turn’ in economics.

Field studies and experiments face the same basic problem as theoretical models — they are built on rather artificial conditions and face a ‘trade-off’ between internal and external validity. The more artificial the conditions, the greater the internal validity, but also the lower the external validity. The more we rig experiments, field studies, or models to avoid ‘confounding factors,’ the less the conditions resemble the real ‘target system.’ You could of course discuss field studies vs. lab experiments vs. theoretical models in terms of realism — but the nodal issue is not about that; it is basically about how economists using different isolation strategies in different ‘nomological machines’ attempt to learn about causal relationships. I have strong doubts about the generalizability of all three research strategies, because the probability is high that causal mechanisms differ across contexts, and a lack of homogeneity/stability/invariance does not give us warranted export licenses to ‘real’ societies or economies.

If we see experiments or field studies as theory tests or models that ultimately aspire to say something about the real ‘target system,’ then the problem of external validity is central (and was for a long time also a key reason why behavioural economists had trouble getting their research results published).

Assume that you have examined how the work performance (A) of Chinese workers is affected by a ‘treatment’ (B). How can we extrapolate/generalize to new samples outside the original population (e.g. to the US)? How do we know that any replication attempt ‘succeeds’? How do we know when such replicated experimental results can be said to justify inferences made about samples from the original population? If, for example, P(A|B) is the conditional density function for the original sample, and we are interested in making an extrapolative prediction of E[A|B], how can we know that the new sample’s density function is identical with the original one? Unless we can give some really good argument for this being the case, inferences built on P(A|B) say nothing about the target system’s P′(A|B).
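
To make the worry concrete, here is a minimal simulation sketch (Python; the populations, effect sizes, and functional forms are all invented for illustration, not taken from any actual study). A ‘treatment effect’ estimated in one sample tells us nothing about a second population whose conditional density P′(A|B) happens to differ:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Original population: the 'treatment' B shifts performance A by +2 on average.
b_orig = rng.integers(0, 2, n)
a_orig = 1.0 + 2.0 * b_orig + rng.normal(0, 1, n)

# Hypothetical target population: the same 'treatment' interacts with a
# different institutional background, and the effect reverses sign.
b_tgt = rng.integers(0, 2, n)
a_tgt = 1.0 - 1.5 * b_tgt + rng.normal(0, 1, n)

def effect(a, b):
    """Difference in mean outcome between 'treated' and 'untreated'."""
    return a[b == 1].mean() - a[b == 0].mean()

print(f"estimated effect in the original sample: {effect(a_orig, b_orig):+.2f}")  # ~ +2.0
print(f"actual effect in the target population:  {effect(a_tgt, b_tgt):+.2f}")    # ~ -1.5
```

Nothing in the first sample warns us that the export will fail; the warrant for extrapolation has to come from outside the data.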

As I see it, this is the heart of the matter. External validity/extrapolation/generalization is founded on the assumption that we can make inferences based on P(A|B) that are exportable to other populations for which P′(A|B) applies. Sure, if one can convincingly show that P and P′ are similar enough, the problems are perhaps surmountable. But arbitrarily introducing functional specification restrictions of the type invariance/stability/homogeneity is, at least for an epistemological realist, far from satisfactory. And often it is – unfortunately – exactly this that I see when I examine mainstream neoclassical economists’ models/experiments/field studies.

By this, I do not mean to say that empirical methods per se are so problematic that they can never be used. On the contrary, I am basically — though not without reservations — in favour of the increased use of experiments and field studies within economics, not least as an alternative to completely barren, ‘bridge-less’ axiomatic-deductive theory models. My criticism is more about aspiration levels and about what we believe we can achieve with our mediational epistemological tools and methods in the social sciences.

Many ‘experimentalists’ claim that it is easy to replicate experiments under different conditions and therefore, a fortiori, easy to test the robustness of experimental results. But is it really that easy? If, in the example given above, we run a test and find that our predictions were not correct – what can we conclude? That B ‘works’ in China but not in the US? That B ‘works’ in a backward agrarian society, but not in a post-modern service society? That B ‘worked’ in the field study conducted in 2008 but not in 2014? Population selection is almost never simple. Had the problem of external validity only been about inference from sample to population, this would be no critical problem. But the really interesting inferences are those we try to make from specific labs/experiments/fields to the specific real-world situations/institutions/structures that we are interested in understanding or (causally) explaining. And then the population problem is more difficult to tackle.

The increasing use of natural and quasi-natural experiments in economics during the last couple of decades has led not only Rodrik but several other prominent economists to declare it, triumphantly, a major step on the path toward empirics: instead of being a deductive philosophy, economics is now increasingly becoming an inductive science.

In randomized trials the researchers try to find out the causal effects that different variables of interest may have by changing circumstances randomly — a procedure somewhat (‘on average’) equivalent to the usual ceteris paribus assumption.

Besides the fact that ‘on average’ is not always ‘good enough,’ it amounts to nothing but hand-waving to assume simpliciter, without argumentation, that it is tenable to treat social agents and relations as homogeneous and interchangeable entities.

Randomization is used to basically allow the econometrician to treat the population as consisting of interchangeable and homogeneous groups (‘treatment’ and ‘control’). The regression models one arrives at by using randomized trials tell us the average effect that variations in variable X have on the outcome variable Y, without having to explicitly control for effects of other explanatory variables R, S, T, etc., etc. Everything is assumed to be essentially equal except the values taken by variable X.
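
As a concrete sketch of this logic (Python; the variable names and effect sizes are invented for illustration): when X is randomized it is independent of the omitted drivers R and S, so a regression of Y on X alone recovers the same average effect that a fully controlled regression would find:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000

r = rng.normal(0, 1, n)      # omitted explanatory variables ...
s = rng.normal(0, 1, n)      # ... that also drive the outcome
x = rng.integers(0, 2, n)    # randomized treatment, independent of r and s
y = 2.0 + 1.5 * x + 3.0 * r - 2.0 * s + rng.normal(0, 1, n)

# Simple difference in means (the OLS slope on a binary X), with no controls:
naive = y[x == 1].mean() - y[x == 0].mean()
print(f"effect of X without controlling for R and S: {naive:+.2f}")  # ~ +1.5
```

Randomization is doing all the work here: had X been correlated with R or S, the uncontrolled estimate would be biased.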

In a usual regression context one would apply an ordinary least squares (OLS) estimator in trying to get an unbiased and consistent estimate:

Y = α + βX + ε,

where α is a constant intercept, β a constant ‘structural’ causal effect and ε an error term.

The problem here is that although we may get an estimate of the ‘true’ average causal effect, this may ‘mask’ important heterogeneous effects of a causal nature. Although we get the ‘right’ answer of the average causal effect being 0, those who are ‘treated’ (X=1) may have causal effects equal to −100 and those ‘not treated’ (X=0) may have causal effects equal to +100. Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the OLS average effect particularly enlightening.
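
A minimal sketch of this masking (Python; the subgroup structure and the ±100 effect sizes are assumed purely for illustration): an unobserved trait splits the population into two halves with individual causal effects of +100 and −100, treatment is randomized, and the OLS/difference-in-means estimate dutifully reports an average effect of about zero:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

g = rng.integers(0, 2, n)              # unobserved trait defining two subgroups
tau = np.where(g == 1, 100.0, -100.0)  # heterogeneous individual causal effects
x = rng.integers(0, 2, n)              # randomized treatment, independent of g
y = 50.0 + tau * x + rng.normal(0, 1, n)

def diff_means(y, x):
    # With a binary regressor, the OLS slope equals this difference in means.
    return y[x == 1].mean() - y[x == 0].mean()

print(f"OLS average effect:   {diff_means(y, x):+8.2f}")                  # ~    0
print(f"effect in subgroup 1: {diff_means(y[g == 1], x[g == 1]):+8.2f}")  # ~ +100
print(f"effect in subgroup 0: {diff_means(y[g == 0], x[g == 0]):+8.2f}")  # ~ -100
```

The regression output is not wrong, it is just silent about exactly the heterogeneity that a person facing the treatment would most want to know about.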

Limiting model assumptions in economic science always have to be closely examined. If we are to show that the mechanisms or causes we isolate and handle in our models are stable, in the sense that they do not change when we ‘export’ them to our ‘target systems,’ we have to show that they hold not only under ceteris paribus conditions; if they hold only ceteris paribus, they are a fortiori of limited value for our understanding, explanations or predictions of real economic systems.

Real-world social systems are not governed by stable causal mechanisms or capacities. The kinds of ‘laws’ and relations that econometrics has established are laws and relations about entities in models that presuppose causal mechanisms to be atomistic and additive. When causal mechanisms operate in real-world social target systems, they do so only in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain, they do so (as a rule) only because we engineered them for that purpose. Outside man-made ‘nomological machines’ they are rare, or even non-existent.

I also think that most ‘randomistas’ really underestimate the heterogeneity problem. It does not just turn up as an external validity problem when trying to ‘export’ regression results to different times or different target populations. It is also often an internal problem for the millions of regression estimates that economists produce every year.

Just like econometrics, randomization promises more than it can deliver, basically because it requires assumptions that cannot be maintained in practice.

Like econometrics, randomization is basically a deductive method. Given the assumptions (such as manipulability, transitivity, separability, additivity, linearity, etc.), these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing of individual effects unless homogeneity is added to the list of assumptions. Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in “closed” models, but what we usually are interested in is causal evidence about the real target system we happen to live in.
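
The ‘no guarantee’ point can be illustrated with a small sketch (Python; the trial size and the confounder are hypothetical). Random assignment balances a confounder only on average, across imagined repetitions of the experiment; in any single finite trial, chance imbalance can be large:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20            # a small but realistic trial size
trials = 10_000   # imagined repetitions we never get to run in practice

imbalance = np.empty(trials)
for t in range(trials):
    u = rng.normal(0, 1, n)                # confounder that also drives the outcome
    treated = rng.permutation(n) < n // 2  # random assignment, 10 vs 10
    imbalance[t] = u[treated].mean() - u[~treated].mean()

print(f"mean imbalance across repetitions: {imbalance.mean():+.3f}")  # ~ 0
print(f"share of single trials with |imbalance| > 0.5 sd: "
      f"{(np.abs(imbalance) > 0.5).mean():.0%}")                      # roughly a quarter
```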

When does a conclusion established in population X hold for target population Y? Only under very restrictive conditions!

‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems is not easy. “It works there” is no evidence for “it will work here”. Causes deduced in an experimental setting still have to show that they come with an export warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ and ‘precise’ methods — and ‘on-average-knowledge’ — is despairingly small.

So, no, I find it hard to share Rodrik’s and others’ enthusiasm and optimism about the value of (quasi)natural experiments and all the statistical-econometric machinery that comes with them. Guess I’m still waiting for the export warrant…

Taking assumptions like utility maximization or market equilibrium as a matter of course leads to the ‘standing presumption in economics that, if an empirical statement is deduced from standard assumptions then that statement is reliable’ …

The ongoing importance of these assumptions is especially evident in those areas of economic research, where empirical results are challenging standard views on economic behaviour like experimental economics or behavioural finance … From the perspective of Model-Platonism, these research-areas are still framed by the ‘superior insights’ associated with early 20th century concepts, essentially because almost all of their results are framed in terms of rational individuals, who engage in optimizing behaviour and, thereby, attain equilibrium. For instance, the attitude to explain cooperation or fair behaviour in experiments by assuming an ‘inequality aversion’ integrated in (a fraction of) the subjects’ preferences is strictly in accordance with the assumption of rational individuals, a feature which the authors are keen to report …

So, while the mere emergence of research areas like experimental economics is sometimes deemed a clear sign for the advent of a new era … a closer look at these fields allows us to illustrate the enduring relevance of the Model-Platonism-topos and, thereby, shows the pervasion of these fields with a traditional neoclassical style of thought.

Jakob Kapeller

Re game theory, yours truly remembers how, back in 1991, when earning my first PhD with a dissertation on decision-making and rationality in social choice theory and game theory, I concluded that

repeatedly it seems as though mathematical tractability and elegance — rather than realism and relevance — have been the most applied guidelines for the behavioural assumptions being made. On a political and social level it is doubtful if the methodological individualism, ahistoricity and formalism they are advocating are especially valid.

This, of course, was like swearing in church. My mainstream neoclassical colleagues were — to say the least — not exactly überjoyed. Listening to what one of the world’s most renowned game theorists — Ariel Rubinstein — has to say in this interview on the rather limited applicability of game theory, I basically think he confirms my doubts about how well-founded Rodrik’s ‘optimism’ is:

Is game theory useful in a concrete sense or not? … I believe that game theory is very interesting. I’ve spent a lot of my life thinking about it, but I don’t respect the claims that it has direct applications.

The analogy I sometimes give is from logic. Logic is a very interesting field in philosophy, or in mathematics. But I don’t think anybody has the illusion that logic helps people to be better performers in life. A good judge does not need to know logic. It may turn out to be useful – logic was useful in the development of the computer sciences, for example – but it’s not directly practical in the sense of helping you figure out how best to behave tomorrow, say in a debate with friends, or when analysing data that you get as a judge or a citizen or as a scientist …

Game theory is about a collection of fables. Are fables useful or not? In some sense, you can say that they are useful, because good fables can give you some new insight into the world and allow you to think about a situation differently. But fables are not useful in the sense of giving you advice about what to do tomorrow, or how to reach an agreement between the West and Iran. The same is true about game theory …

In general, I would say there were too many claims made by game theoreticians about its relevance. Every book of game theory starts with “Game theory is very relevant to everything that you can imagine, and probably many things that you can’t imagine.” In my opinion that’s just a marketing device …

So — contrary to Rodrik’s optimism — I would argue that although different ’empirical’ approaches have been — more or less — integrated into mainstream economics, there is still a long way to go before economics has become a truly empirical science.

  1. Romar Correa
    April 16, 2022 at 11:00 am

    With “laws and relations” in economics not being “atomistic and additive” and with social systems the “whole (being) more than the mechanical sum of the parts”, Lars Syll has voiced the basic precept of General Systems Theory (GST). Harking back to Professor Syll’s first doctoral dissertation, both “methodological individualism and ahistoricity” are overturned in GST.
    One characteristic of GST that is being developed today is the emergence of novelty. A building block is the distinction between knowledge and learning that goes back at least to Kenneth Arrow (Antonelli & David, 2015). His learning by doing is at the heart of the generation of new ideas. To it may be added learning by using and learning by interacting. The knowledge generated is inchoate and can only be transferred by positive intents. Recently, the role of the state in fostering entrepreneurship has come to be appreciated (Langlois, 2015). In a milieu that is institutionally poor for the purpose, governments can fill the gap with driven innovation trajectories. They can draw up schema for the implementation of labor-using technical change.
    The skepticism of Lars Syll about experiments is well taken because the acid test of kindness and trustworthiness is conducted in the crucible of history. A theory of motivation will not be found in the familiar accounts (Gold, 2021). Different motivations respond to different incentives. Using monetary sticks and carrots when people are acting on non-self-regarding cues will be counter-productive. Financial incentives, therefore, often result in the opposite of the expected effects. For the same reason, the threat of punishment for reneging on commitments often increases the amount of untrustworthy behavior, and writing financial penalties into contracts is likely to increase the number of contracts being violated.

    References

    Antonelli, Cristiano & David, Paul (2015), ‘The Economic Properties of Information and Knowledge: An Introduction’, Department of Economics and Statistics, Università degli Studi di Torino, Working Paper 37/15

    Gold, Natalie (2021), ‘How should we reconcile self-regarding and pro-social motivations? A renaissance of “Das Adam Smith Problem”’, Social Philosophy and Policy, 37(1), 80-102

    Langlois, Richard N. (2015), ‘Institutions for Getting Out of the Way? A Comment on McCloskey’, Department of Economics, University of Connecticut, Working Paper 2015-12

  2. Dave Raithel
    April 16, 2022 at 2:09 pm

    “But I don’t think anybody has the illusion that logic helps people to be better performers in life.” I counter that people who will not accede to the meaning of the words All, Some, None and then accept a syllogism, and who will not accept natural deduction, are terrible people to be around; and typically, they are the cynical dishonest heaps of humanity that adore a criminal thug like trump or his buddy, Peeeyutin. I think an illogical judge winds up being a Clarence Thomas or a Coney Dog Bark Bark. Not a very good citation to enlist my sympathy for Lars’ epistemic nihilism.

    • Romar Correa
      April 16, 2022 at 3:48 pm

      The notion of “multiple selves” goes back at least to Adam Smith and was developed by Thomas Schelling. We have an illustration of ‘multiple logics’ at play. Another example is ‘fast-moving’ and ‘slow-moving’ variables, not just in connection with individual response times but also in terms of the agglomeration of individuals in historical processes, as Peter Temin has shown. Finally, we have no shortage of papers distinguishing between Nash equilibrium and Kantian equilibrium. The dynamics of the capitalist economy, say, are given in the form of differential equations. Two possible maximands are on offer to capitalists and workers, say. In the familiar case, each maximizes her own utility, given the utility of the other. In the Kantian case, also called the Berge case, each optimizes the other’s payoff, given her own payoff. Accordingly, there are two solutions, as sketched below. The NE could be a Trump-Putin outcome. The Berge-Kantian equilibrium would be a nice outcome.
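
      As a minimal illustration of the two solution concepts (Python; the prisoner’s-dilemma payoffs are invented for the example):

      ```python
      from itertools import product

      # Illustrative prisoner's-dilemma payoffs: (row player, column player)
      C, D = "cooperate", "defect"
      payoff = {
          (C, C): (3, 3), (C, D): (0, 5),
          (D, C): (5, 0), (D, D): (1, 1),
      }

      def is_nash(x, y):
          # Each player maximizes her own payoff, given the other's strategy.
          return (all(payoff[(x, y)][0] >= payoff[(a, y)][0] for a in (C, D)) and
                  all(payoff[(x, y)][1] >= payoff[(x, b)][1] for b in (C, D)))

      def is_berge(x, y):
          # Each player's payoff is maximized by the other's strategy choice.
          return (all(payoff[(x, y)][0] >= payoff[(x, b)][0] for b in (C, D)) and
                  all(payoff[(x, y)][1] >= payoff[(a, y)][1] for a in (C, D)))

      for x, y in product((C, D), repeat=2):
          print(f"({x}, {y}): Nash={is_nash(x, y)}, Berge={is_berge(x, y)}")
      # Nash equilibrium: (defect, defect); Berge-Kantian: (cooperate, cooperate).
      ```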

  3. metaecongary
    April 16, 2022 at 8:05 pm

    Not sure much of the argument works for me, trained from the beginning in the very empirical, data-sourced field of agricultural economics. And, sure, we were offered the self-interest-only (single interest theory) framing of microeconomics in graduate school, the ideology of mainstream economics, in the attempt to brainwash us, but, for many (including me) it did not work. And, why didn’t it work? Well, we surveyed, did focus groups with, and just “hung out” with real working farmers, and, that which mainstream microeconomics preaches, well, it is not reality, it is not data and science based. Real farmers are Humans, and do not behave that way, and, we can study real Humans using data, doing empirical, science-based analysis. We can test null hypotheses, we can continue the search for reality — like a farmer once said to me, after a long and productive focus group session, “So, you want to find out what turns my crank (it was about the economic choice to conserve, or not, soil and water, empathy playing a role in resolving downstream pollution)…” “Yes,” I said, “exactly.” And, that is empirical economics: What turns the crank of real people in real economic choice. Try my Dual Interest Theory (built on a strong foundation of empirical science, lots of real-world data and lab experiments, all finding the same thing) in Metaeconomics, which in general is built on the empirical foundation of Behavioral and Neuroeconomics Science. Try it, you might like it (just Google Dual Interest Theory, for starters, and, then, look for Metaeconomics). An empirical and science-based economist, here…

    • Meta Capitalism
      April 23, 2022 at 5:50 am

      Metaeconomics (met′ə ē′kə näm′iks) n. 1. an economics that broadens rational economic choice to include the moral and ethical dimension within a shared other-interest, 2. an economic theory that sees human nature as primarily seeking an egoistic-hedonistic based self-interest which is tempered by the empathy-sympathy based other(ethics based, shared with others, yet internalized within ownself)-interest 3. an economic theory positing that individuals seek to maximize own-interest which involves maximizing a joint self&other-interest 4. an economics seeing the virtue of prudence at the core of the more primal self-interest, and the core of the other-interest reflecting the other virtues— temperance, justice, courage, faith, hope and love 5. an economics that sees the central role of liberty, freedom and independence of the individual, but yet seeing the equally essential human need for connection with common cause as represented in the other(shared with others)-interest 6. an economics seeing self-sacrifice in both domains of interest as essential to maximizing the own-interest. (https://www.metaeconomics.info/definition-of-metaeconomics)

      Thank you Gary. This is a goldmine. I believe the Japanese people (especially the younger generation) will find great value in your ideas.

      • metaecongary
        May 7, 2022 at 6:20 pm

        Thank you for posting the Metaeconomics definition. Readers can also look directly at the underlying dual interest theory, which brings analytical capacity to that which is represented in the definition, at the same website: https://www.metaeconomics.info/what-is-dual-interest-theory

  4. Ken Zimmerman
    May 2, 2022 at 2:09 am

    The answer to the question depends on the realities in which economists and reviewers exist and the extent to which those realities are compatible or at least not in conflict. Let’s consider the options offered in books about science. Game theory is the study of the ways in which interacting choices of actors produce outcomes with respect to the preferences (or utilities) of those actors, where the outcomes in question might have been intended by none of them. A meta or empirical game is a simplified model of a complex multi-agent interaction. In order to analyze complex multi-agent systems like poker, we do not consider all possible atomic actions but rather a set of relevant meta-strategies that are often played. All this game-theoretical empiricism cannot remove the need for human judgment. Put differently, game theory can’t win or lose a poker game.

    The natural experiment is an observation of an event or situation that allows for, or seems to allow for, the non-intentional random or seemingly random assignment of actors to different groups for the purpose of answering particular questions. Since such experiments cannot be planned, their actual usefulness is problematic at best. But none of this removes judgment as the essential element in natural experiments.

    Laboratory and field experiments are conducted, respectively, within a social-scientific lab setting and outside of lab settings. The first is often focused on small-group research and has limited applicability outside settings similar to those in the laboratory. In field experiments, better termed quasi-experiments and used extensively in the social sciences, actors (or other sampling units) are randomly assigned to either treatment or control groups in order to test claims of causal relationships. Quasi-experiments occur when treatments are administered as-if randomly (e.g. U.S. Congressional districts where candidates win with slim margins, weather patterns, natural disasters, etc.). Random assignment helps establish the comparability of the treatment and control groups, so that any differences between them that emerge after the treatment has been administered plausibly reflect the influence of the treatment rather than pre-existing differences between the groups. The distinguishing characteristics of field experiments are that they are conducted in real-world settings and often inconspicuously. This is in contrast to laboratory experiments, which enforce scientific control by testing a hypothesis in the artificial and highly controlled setting of a laboratory. Field experiments also differ in context from naturally occurring experiments. While naturally occurring experiments rely on an external force (e.g. a government, a nonprofit, etc.) controlling the randomization, treatment assignment, and implementation, field experiments require researchers to retain control over randomization and implementation. Experiments of every sort prompt the same question: how should they be assessed in terms of the applicability of their results outside the experiment?

    The issue is the extent to which such approaches allow even a small revelation of any reasonable part of any reality outside the experiment. This is a complex question and can never be answered fully or with finality. But overall, such approaches can, in some situations, help social scientists find their way toward more reliable snapshots of one or more realities.

    Andrew Pickering (sociologist and physicist) explains why none of this functions as depicted in books and papers about science.

    In the scientist’s account, experiment is seen as the supreme arbiter of theory. Experimental facts dictate which theories are to be accepted and which rejected. Experimental data on scaling, neutral currents and charmed particles, for example, dictated that the quark-gauge theory picture was to be preferred over alternative descriptions of the world. There are, though, two well-known and forceful philosophical objections to this view, each of which implies that experiment cannot oblige scientists to make a particular choice of theories. First, even if one were to accept that experiment produces unequivocal fact, it would remain the case that choice of a theory is underdetermined by any finite set of data. It is always possible to invent an unlimited set of theories, each one capable of explaining a given set of facts. Of course, many of these theories may seem implausible, but to speak of plausibility is to point to a role for scientific judgment: the relative plausibility of competing theories cannot be seen as residing in data which are equally well explained by all of them. Such judgments are intrinsic to theory choice, and clearly entail something more than a straightforward comparison of predictions with data. Furthermore, whilst one could in principle imagine that a given theory might be in perfect agreement with all of the relevant facts, historically this seems never to be the case. There are always misfits between theoretical predictions and contemporary experimental data. Again judgments are inevitable: which theories merit elaboration in the face of apparent empirical falsification, and which do not?

    The second objection to the scientist’s version is that the idea that experiment produces unequivocal fact is deeply problematic. At the heart of the scientist’s version is the image of experimental apparatus as a ‘closed’, perfectly well understood system. Just because the apparatus is closed in this sense, whatever data it produces must command universal assent; if everyone agrees upon how an experiment works and that it has been competently performed, there is no way in which its findings can be disputed. However, it appears that this is not an adequate image of actual experiments. They are better regarded as being performed upon ‘open’, imperfectly understood systems, and therefore experimental reports are fallible. This fallibility arises in two ways. First, scientists’ understanding of any experiment is dependent upon theories of how the apparatus performs, and if these theories change then so will the data produced. More far reaching than this, though, is the observation that experimental reports necessarily rest upon incomplete foundations. To give a relevant example, one can note that much of the effort which goes into the performance and interpretation of HEP [high energy physics] experiments is devoted to minimising ‘background’-physical processes which are uninteresting in themselves, but which can mimic the phenomenon under investigation. Experimenters do their best, of course, to eliminate all possible sources of background, but it is a commonplace of experimental science that this process has to stop somewhere if results are ever to be presented. Again a judgment is required, that enough has been done by the experimenters to make it probable that background effects cannot explain the reported signal, and such judgments can always, in principle, be called into question. The determined critic can always concoct some possible, if improbable, source of error which has not been ruled out by the experimenters.

    Missing from the scientist’s account, then, is any apparent reference to the judgments entailed in the production of scientific knowledge-judgments relating to the acceptability of experimental data as facts about natural phenomena, and judgments relating to the plausibility of theories. But this lack is only apparent. The scientist’s account avoids any explicit reference to judgments by retrospectively adjudicating upon their validity. By this I mean the following. Theoretical entities like quarks, and conceptualisations of natural phenomena like the weak neutral current, are in the first instance theoretical constructs: they appear as terms in theories elaborated by scientists. However, scientists typically make the realist identification of these constructs with the contents of nature, and then use this identification retrospectively to legitimate and make unproblematic existing scientific judgments. Thus, for example, the experiments which discovered the weak neutral current are now represented in the scientist’s account as closed systems just because the neutral current is seen to be real. Conversely, other observation reports which were once taken to imply the non-existence of the neutral current are now represented as being erroneous: clearly, if one accepts the reality of the neutral current, this must be the case. Similarly, by interpreting quarks and so on as real entities, the choice of quark models and gauge theories is made to seem unproblematic: if quarks really are the fundamental building blocks of the world, why should anyone want to explore alternative theories?

    Most scientists think of it as their purpose to explore the underlying structure of material reality, and it therefore seems quite reasonable for them to view their history in this way. But from the perspective of the historian the realist idiom is considerably less attractive. Its most serious shortcoming is that it is retrospective. One can only appeal to the reality of theoretical constructs to legitimate scientific judgments when one has already decided which constructs are real. And consensus over the reality of particular constructs is the outcome of a historical process. Thus, if one is interested in the nature of the process itself rather than in simply its conclusion, recourse to the reality of natural phenomena and theoretical entities is self defeating.

    How is one to escape from retrospection in analysing the history of science? To answer this question, it is useful to reformulate the objection to the scientist’s account in terms of the location of agency in science. In the scientist’s account, scientists do not appear as genuine agents. Scientists are represented rather as passive observers of nature: the facts of natural reality are revealed through experiment; the experimenter’s duty is simply to report what he sees; the theorist accepts such reports and supplies apparently unproblematic explanations of them. One gets little feeling that scientists actually do anything in their day-to-day practice. Inasmuch as agency appears anywhere in the scientist’s account it is ascribed to natural phenomena which, by manifesting themselves through the medium of experiment, somehow direct the evolution of science. Seen in this light, there is something odd about the scientist’s account. The attribution of agency to inanimate matter rather than to human actors is not a routinely acceptable notion. In this book, the view will be that agency belongs to actors not phenomena: scientists make their own history, they are not the passive mouthpieces of nature. This perspective has two advantages for the historian. First, while it may be the scientist’s job to discover the structure of nature, it is certainly not the historian’s. The historian deals in texts, which give him access not to natural reality but to the actions of scientists-scientific practice. The historian’s methods are appropriate to the exploration of what scientists were doing at a given time, but will never lead him to a quark or a neutral current. And, by paying attention to texts as indicators of contemporary scientific practice, the historian can escape from the retrospective idiom of the scientist. He can, in this way, attempt to understand the process of scientific development, and the judgments entailed in it, in contemporary rather than retrospective terms-but only, of course, if he distances himself from the realist identification of theoretical constructs with the contents of nature.

    This is where the mirror symmetry arises between the scientist’s account and that offered here. The scientist legitimates scientific judgments by reference to the state of nature; I attempt to understand them by reference to the cultural context in which they are made. I put scientific practice, which is accessible to the historian’s methods, at the centre of my account, rather than the putative but inaccessible reality of theoretical constructs. My goal is to interpret the historical development of particle physics, including the pattern of scientific judgments entailed in it, in terms of the dynamics of research practice. (Constructing Quarks: A Sociological History of Particle Physics, pp. 5-8)

  5. yoshinorishiozawa
    May 7, 2022 at 3:03 pm

    I support the objections by Andrew Pickering to a majority of natural scientists who believe there is a decisive experiment. Their philosophy of science is normally very naive.

    The insight obtained by Andrew Pickering is important for economics. There are no decisive experiments in economics. To cite a few examples: the Leontief Paradox, the anomalies of the Heckscher-Ohlin-Vanek model (it turned out that the model has no predictive power), the capital theory controversy (it was shown that the aggregate production function is logically incoherent), etc.

    This is the reason why criticizing this and that mainstream model is ineffective. As I have repeated many times here, it takes a theory to beat a theory. We must obtain a theory with which we can compete with, and go beyond, neoclassical economics. Without such a theory, criticism is useless. Continuing such behavior would only show the incapacity of heterodox economists. Although there are many insufficiencies, mainstream economists believe they are right, because it is only mainstream economists who can produce any positive results. They believe their research program is progressive, whereas heterodox economics and economic methodology are preoccupied with useless criticism without any prospect for the further development of economics.

    • metaecongary
      May 7, 2022 at 4:57 pm

      In terms of an alternative theory to the single interest theory (self-interest only, the Econ) in mainstream (micro)economics, dual interest theory beats single interest theory in every empirical test to date, ongoing for three decades. People have dual interests, as represented in a joint and non-separable self-interest & other (shared with the other, yet internalized to own-self)-interest. It is in Human biology, evolved with dual interest: Ego & empathy (both capacities in the Human brain, not just the Ego of the Econ); I & We; person & community; self & other; and, writ large, market & government. People are Humans, not Econs. And, while ego-based self-interest is primal, empathy-based other(that which the other can go along with, ethics-based)-interest nudges and tempers it. Said tempering is essential (and needs self-command, self-control) to avoid the excesses of the primal self-interest. Dual interest theory is built on an empirical foundation pointing to the key role of empathy in tempering ego, which is essential to economic efficiency, stability in the political economy, and, yes, happiness. It is an alternative theory for all heterodox economists — for building a Real World Economics Theory — to consider, test, and start using. It works. The new book Metaeconomics: Tempering Excessive Greed gives an overview. Just Google Metaeconomics, Dual Interest Theory, Empathy Conservation — to find it.

    • metaecongary
      May 7, 2022 at 5:06 pm

      Find ways to access the Dual Interest Theory in Metaeconomics book at https://tinyurl.com/yxagxtuf . And, the null hypothesis that shared other-interest is not a force in tempering the more primal self-interest-driven economic choice, if not rejected, keeps Single Interest Theory in Microeconomics in play. Rejecting the null, well, Dual Interest Theory beats Single Interest Theory. Testing of said null over the past 3-4 decades has resulted in rejecting the null in every empirical test, using survey data, focus groups, and laboratory experiments. Dual interest theory beats self-interest theory.

      Gary D Lynne, Professor Emeritus, University of Nebraska – Lincoln. Metaeconomics Book: https://tinyurl.com/yxagxtuf Website: https://www.metaeconomics.info Blog: https://www.metaeconomics.info/blog Facebook: https://tinyurl.com/yxszc74v Twitter: https://twitter.com/metaeconomics #metaeconomics

  6. gerald holtham
    October 4, 2022 at 10:19 pm

    The relevant question about any attempt to understand phenomena is not whether it can give us the truth or prove that something is always incontestably so, but how it compares with the other methods we might employ. Is it better or, if not better, does it nonetheless yield insights different from those that other methods do? It is difficult, as Yoshinori says, to beat a theory without an alternative theory. It is also difficult to get anywhere by pointing out the inevitable fallibility of all methods of empirical research. It is more useful to compare one method with another and determine which is least inappropriate in a particular case. Insist on the unattainable and you end up with nihilism.
