Economia critiques the archaic theories and practices through which economies are managed, or mismanaged, and develops a new vision that grows out of modern systems concepts and the idea that economies should support the health of society and the overall well-being of people in all our rich complexity.
Geoff Davies’ project is distinguished by such commonsense, hard science, practicality, surprise, fine writing and expert contempt for orthodox economics that it’s a joy to read for visionaries and sceptics alike.
– Hugh Stretton, social scientist and author.
Economia, 2004 (No longer in print; see download option below.)
Download a pdf file of Economia
Is the Neoclassical Theory Scientific?
[This is Chapter 6 of Economia.]
We have seen that many economists have come to regard neoclassical economics as an objective and rigorous discipline, and evidently many imagine this to imply that it is a science. However this betrays a basic misunderstanding of what science is. For many non-economists this is not a serious issue. There are many jokes about economists’ inability to predict the course of the economy and about their disagreement about correct policies. Economists are widely viewed as no better than weather forecasters, perhaps much worse, and their pretensions to being rigorous and scientific are not taken seriously by much of the population.
However economists have succeeded in becoming dominant influences on public policy, and they have done this substantially on the basis of their claim to be experts. As well, the academic discipline of economics is deeply entrenched within universities, where its mathematical sophistication is flaunted as evidence of its merit. Finally, many people have trouble knowing how to judge claims to authoritative knowledge, and much of their confusion arises from widespread misconceptions that science is about truth and certainty. I will therefore devote some of this chapter to looking at the process of science and the status of its accepted theories.
Before proceeding, I want to acknowledge that many people recoil from the idea that human affairs might be dissected by scientists. A common image is that science is about dry and dead mechanisms, and that if we submit to a scientific picture of society we will be giving up our humanity or our connection with divinity, by being reduced to predictable automatons. Some scientists have also made quite misplaced claims about science revealing “truth” or the mind of God. Most science during the past several centuries has been of the reductionist kind; in other words it reduces systems to simpler components which can be more easily understood. Unfortunately this approach is not capable of encompassing the essence of living beings. It is therefore quite appropriate to hold reservations about the applicability to people of this kind of science and some of its grander claims.
However in the Introduction I introduced complex self-organising systems, whose properties are more than a simple sum of their parts and which display intriguing parallels with the living world and with human societies. The promise of complex systems is that they may lead us to better understanding of ourselves and our societies in general terms, but without intruding on our individuality and spontaneity. They would leave intact the sense of mystery, delight and adventure of the living world. They are also requiring scientists to adopt a more appropriate humility, one that we scientists might well have shown long ago.
Stages in the Process of Science – Irrational and Rational
The process of science has several steps that can be grouped into two main stages. The first stage is a process of perceiving a pattern in the observed world and formulating a description of that pattern as a hypothesis. The second stage involves deducing what the consequences of the hypothesis would be, and comparing those consequences with other observations of the real world. If the deduced consequences match the new observations, we say the hypothesis is supported, and we might dignify it by calling it a theory. We come to regard it as a good theory if we repeat this process and find that its deduced consequences match many observations of the real world to within a useful accuracy.
Let us take a very familiar and perhaps seemingly trivial example to illustrate this process, with apologies to Ptolemy. We may notice that about every 24 hours the sun rises in the east, moves high across the sky and sets in the west. In other words, we notice a pattern in the way the sun appears to move. We might then formulate a hypothesis to describe or encapsulate the pattern. For example, we might hypothesise that the sun moves steadily around the earth once every 24 hours on a circular path that carries it sometimes above the horizon and sometimes below it. From this hypothesis we can deduce that the sun should rise again in the east, about 24 hours after the last time we saw it rise. We can wait and see if this happens. If it does, we can consider that our hypothesis is supported by an observation, and that it seems to be a good hypothesis. If the sun continues to rise about once every 24 hours, we might conclude that our hypothesis is supported by observations of the real world, and that we will call it a theory. Although it may seem trivial, this example allows several important things about the scientific process to be drawn out.
The first stage of the process is the perception of a pattern in the world and the description or formulation of this as a hypothesis. This process of perception and formulation is often called induction. However despite this formal-sounding name, the perception process is not a rational process. It is a process of cognition that is deeply wired into our brains and has nothing to do with logic. Our clever brains are very good at perceiving regularities or patterns in the world, such as similarities in the shapes of animals, trees or faces, or regular events like days and seasons or musical rhythms. Indeed, we often perceive different patterns in the same observations. For example we can see faces in clouds, and there are clever visual puzzles that have been constructed to look first like one thing (a young woman’s face) then like another (an old woman’s face). In such cases our brains are receiving the same signals from the world, but we can make different stories from them, depending on how we “look” at them.
The intrinsic ambiguity of perception means that we have to be very careful about claiming the pattern we perceive to be the “true reality”. This is not an obscure point of philosophical debate or of optical illusions, it is of central importance in science. For example, Einstein’s theory of gravity is not a modification or extension of Newton’s theory, it is based on quite different perceptions. Einstein abandoned Newton’s idea of force acting at a distance and replaced it with the idea of local variations in the rules of geometry. These are entirely different and incompatible conceptions of how the universe works, though they yield similar predictions in many circumstances. So how can we say that Newton’s conception was true, or false, and can we have any confidence that Einstein’s conception will not be replaced? Where, then, is the “truth”?
The second stage of the scientific process is the deduction of consequences of the hypothesis, and the comparison of those deduced consequences with more observations. This deduction and comparison stage is often called the empirical testing or just the testing of the hypothesis. In contrast to the induction stage, the deduction stage is logical and rational. Deducing the consequences of our hypothesis about the sun’s motion is rather trivial and doesn’t require any sophisticated logical tools, but it is nevertheless a strictly logical process.
It is from the deductive stage that science has gained the reputation for being rational. It is also from this stage that much of its reputation for being impenetrable derives, since very elaborate logical methods are often used. Deducing the time of next appearance of our orbiting sun is simple, but deducing the aggregate behaviour of a box full of bouncing atoms or a collection of simple economic agents is not simple: it requires the help of sophisticated mathematical tools. Mathematics comprises a vast collection of elaborate logical structures already worked out, which is why it is a very useful and frequently used tool in this stage of the scientific process.
There is another kind of irrationality associated with science, but it comes not from the idealised process I have just described, but from the fact that science is practiced by human beings. The most obvious manifestation of this is that scientists regularly become emotionally attached to pet theories. This is partly because of the culture in which science is currently practiced, in which financial support depends on being right reasonably often. However it is also often because of an egotistical need to win, to be right at the expense of rival theories and their advocates. There may also be a deeper and less reprehensible reason, namely that the patterns we perceive actually become engraved in our neuronal connections, as a habitual way of thinking, and it is then harder to induce us to perceive a pattern that is inconsistent with our usual habit. Whatever the detailed reasons, theories often remain current amongst scientists beyond the time when there is clear evidence that the limits of their usefulness have been revealed.
A Familiar Process
Science is really a very familiar process developed into a refined state. We use the scientific process every day without even noticing. For example, we might say “It’s late and Mary’s not home from work yet, I hope she hasn’t had an accident”. We call her office and find that she’s there working late. We’re relieved that she’s safe. In this process, we have noticed a pattern, or in this case a deviation from a familiar pattern in Mary’s behaviour. We have then formulated a hypothesis, that Mary might have had an accident on her way home. A logical consequence of this hypothesis would be that she would not be in her office. We test this hypothesis by calling her work telephone. We find that she is there, contrary to the implication of the hypothesis. In this case we have found that further observation of the world is inconsistent with the hypothesis, so we discard it, with relief. This has been an application of the scientific method.
Robert Pirsig, in his multileveled book Zen and the Art of Motorcycle Maintenance, has given a good account of how he uses the scientific process in the course of diagnosing a problem with his motorcycle, which he treasures[i]. If you hear an unusual sound coming from your bike (perception), you wonder what might be causing it. From the nature of the sound, you suspect that it might be coming either from the engine or a wheel bearing (hypothesis). You might lean sideways or backward to hear how the sound changes (deduction and testing). If you still can’t tell, you might change gears, to test whether the sound changes with engine speed or wheel speed. If it changes with engine speed, you might then formulate a new hypothesis about which part of the engine the sound is coming from. And so on.
It is not the process of science that is unfamiliar to non-scientists: we use it all the time to solve little mysteries in our daily lives. Rather it is the specialised jargon and mathematical tools that are often used in its deductive stage that are impenetrable to most of us. It is also that science keeps producing unfamiliar perspectives that can be counter-intuitive and even disturbing. Thus science has taken us into quantum effects, dinosaur ages and molecular genetic codes, and scientists talk about Schrödinger’s equation, or the Cretaceous extinction, or DNA replication, and many people don’t know what they’re talking about. Scientists use differential calculus, variational theory, topology, computers, and so on, to help them to deduce consequences of their hypotheses. But this is only the jargon getting in the way. The process of science is the same process you might use to figure out why the cat hasn’t come home, or why the car is making a strange noise, or any of the myriad little mysteries that life confronts us with.
Some Common Misconceptions
There are some other common misconceptions about science that we can clarify with the picture we have developed here. For example, predictions and observations need not have great precision to yield important scientific insights. It is important in my own field to know that if the earth were molten when it formed, it would probably have frozen within 10,000 years, rather than within 10 million years. You can get an adequate estimate from a simple formula and a rough back-of-the-envelope calculation. I won’t explain any further here why this matters; the point is that estimates of the freezing time need not be more accurate than a factor of 100 for the information to be very useful.
So sometimes good science can be done without accurate numbers and fancy mathematics. If more social scientists understood this they might spend less time fruitlessly trying to emulate the quantitative precision of Newtonian physics. If more economists understood it they might spend less time on mathematical deductions and have more time to notice that they are severely neglecting both the induction and the empirical testing steps of science.
Another common misconception is that scientific progress is steady and inevitable. Science does not proceed systematically by cool logical steps from one discovery to the next. Science proceeds erratically. It depends on scientists noticing patterns or perceiving connections that nobody has noticed before, and this part of science is irrational and unpredictable. This means that an important idea can lie unnoticed while scientists are busy doing other things. Thus in my own field I can look back and wonder why they didn’t see in the 1940s that continents have been slowly drifting about the earth’s surface, since the idea had already been proposed and they had a lot of the important evidence and concepts available by then. It was not until the 1960s that the idea was seriously revived in the form of the plate tectonics hypothesis. On the other hand, sometimes a scientist who has an unusual combination of experiences sees connections that no-one else can see, because no-one else knows enough about all of the relevant things. Later, if the idea is revived or rediscovered, we say that the scientist was “ahead of his time”. Examples are Wegener’s proposal in 1912 that continents have moved about the earth’s surface, and Charles Babbage’s proposal in the nineteenth century to build a mechanical computing machine. Einstein’s theories of relativity were so radical at first that only a handful of people could begin to understand them, and it was quite a long time before good observational tests could be designed and performed.
The misleading notion that science is logical and inevitable has unfortunately been promoted by detective novels. Sherlock Holmes could formulate hypotheses and proceed systematically to search out clues within the carefully ordered fictional world created by his author, Arthur Conan Doyle, but the real world is much messier, and key information is often obscured or lost, as any real detective would undoubtedly assure us. Sometimes some key information turns up years later and allows a case to be solved. Detective work, like science, can proceed erratically too.
Scientific theories are not about “truth”. Scientists do not “prove” things, the way mathematicians do. Scientific theories cannot be proven to be “right”. The contrary beliefs are common and profound misconceptions, but they are easily shown to be incorrect. The simple reason is that we never know when someone might pop up with a different and better theory. Thus Copernicus proposed that the earth rotates on its axis and the planets move around the sun, in contrast to our theory above that the sun moves around the earth. To casual naked-eye observation, the two theories are equally acceptable. In other words they each can account adequately for the way the sun appears to move across the sky. So which theory is “right”? It turns out that the planets’ apparent motions are more accurately reproduced by Copernicus’ theory, or rather by Kepler’s refinement of it. However if we just want to predict the daily apparent motion of the sun, it is quite adequate to presume that the sun moves around the earth, though we will have to design a more elaborate path for the sun if we want also to predict the progress of the seasons. Apparently the builders of Stonehenge and many other ancient stone monuments formulated just such theories and found them to be useful. Similarly, if we just want to navigate about the earth’s surface, the old Greek picture of a celestial sphere with stars fixed upon it will serve quite adequately.
Scientific theories can be judged to be more useful or less useful, rather than right or wrong. Thus Kepler’s theory of planetary motion is more accurate and therefore more useful than Ptolemy’s. Newton’s theory of gravity serves just as well as Kepler’s theory for describing planetary motions, but it replaces Kepler’s three laws with Newton’s one, and it describes apples falling from trees as well, so it is more concise and it is useful in a broader context. Einstein’s general theory of relativity describes planetary motions more accurately than Newton’s theory of gravity, and it also helps to describe the structures of black holes and of the universe, so it is more generally useful again. However, for many familiar contexts Newton’s theory is entirely adequate. So is Newton’s theory wrong?
In mathematics, we prove a theorem by showing that it follows by strict logic from some postulates. We disprove a proposed theorem by showing that it contradicts the postulates, for example by exhibiting a counterexample. There is a well-known philosophy of science, advocated chiefly by Karl Popper, which says that although scientific theories cannot be proven, they can be disproven. By this is meant partly what I have said above, namely that we can’t say a theory is right because there might be another theory that serves as well or better, but on the other hand that a theory is “wrong” if it is inconsistent with any observations. The trouble with this conception of science is that it leads us to say that Newton’s theory of gravity is disproven because the observed motion of Mercury is not consistent with it. This does not seem to be a useful attitude, because Newton’s theory is still extremely useful in many other contexts. I think the problem is a semantic one that can be avoided by confining the term “proof” to logic and mathematics, and by saying simply that a scientific theory is useful or not useful in a particular context.
A theory is useful if it can be used to make predictions to a useful level of accuracy. The interpretation of “useful” depends on the context. Thus current theories of the variation of the weather are useful over a period of a couple of days, but not useful over a period of weeks. Ptolemy’s theory that the sun and the planets revolve around the earth is useful for predicting the cycle of the seasons, but not useful for navigating to Mars. Newton’s theory of gravity is useful for understanding the motions of most of the solar system, even though Einstein’s theory predicts the motion of Mercury to a measurably better accuracy.
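The Mercury example can be made concrete with a back-of-the-envelope calculation of the kind I praised earlier. Einstein's theory predicts an extra advance of Mercury's perihelion, beyond anything Newton's theory gives, of about 43 seconds of arc per century. The sketch below is my own illustration, using the standard first-order relativistic formula and rounded textbook values for the sun and Mercury's orbit:

```python
import math

# First-order general-relativistic perihelion advance per orbit:
#   delta_phi = 6 * pi * G * M / (a * c^2 * (1 - e^2))
# Rounded textbook values for the sun and Mercury's orbit.
GM_SUN = 1.327e20      # G * M_sun, in m^3 / s^2
C = 2.998e8            # speed of light, m / s
A = 5.79e10            # semi-major axis of Mercury's orbit, m
E = 0.2056             # eccentricity of Mercury's orbit
PERIOD_DAYS = 87.97    # Mercury's orbital period, days

advance_per_orbit = 6 * math.pi * GM_SUN / (A * C**2 * (1 - E**2))  # radians

orbits_per_century = 36525 / PERIOD_DAYS
arcsec_per_radian = 180 * 3600 / math.pi

advance_per_century = advance_per_orbit * orbits_per_century * arcsec_per_radian
print(round(advance_per_century))  # ~43 arcseconds per century
```

A few lines of arithmetic suffice to show how tiny the discrepancy is: an observation must be precise to a few seconds of arc per century before it can distinguish the two theories, which is why Newton's theory remains entirely adequate for navigating most of the solar system.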
It is also important to bear in mind that we can never observe the world with infinite precision, and often we can’t measure all of the things that we might like to measure. The truth of this is forcefully apparent in my own studies of the earth’s interior, which have to rely on indirect observations. Thus observations always have uncertainties and are often incomplete. This is why a theory can never be shown to be “correct”, in the sense of describing the world with complete accuracy.
To summarise, science is not a fully rational process. There is an important component of irrationality and creativity. The formulation of a hypothesis is a non-rational and creative act, whereas the deduction and testing of it are rational. The rational deduction stage may involve elaborate mathematics or computing, or it may be so simple as to be obvious. Observational tests may be extremely precise or very rough, so highly precise numbers are not always essential to good science. Theories may be judged to be useful in a particular context if they permit predictions that match reality to a useful level of accuracy. More than one theory may serve in a particular context (for example, Newton’s and Einstein’s), so it is inappropriate to say that one theory is “true” and another is “false”. Observations always have inaccuracies at some level and they are often incomplete so we can never decide that a theory matches reality exactly. Again, it is not appropriate to claim that a theory is “true” or “proven”.
Status of the Neoclassical Theory
Now let us return to the neoclassical theory of economics, and to the attitudes of some of its proponents. We can certainly regard neoclassical economics as a scientific hypothesis, and even as a bold and imaginative one for its time. In previous chapters we have pursued this view by comparing neoclassical assumptions and predictions with observations. The goal was to decide if neoclassical economics is a useful theory.
The quotations from Walras, Jevons, Debreu and others in Chapter 3 reveal a confusion between mathematics and science. They overemphasised the deductive and mathematical part of the scientific process. They lacked an appreciation of the subtleties of the inductive phase and they had a disdain for empirical testing of the theory’s predictions.
Walras and Jevons seemed to think it is a straightforward and rather trivial matter to abstract idealised concepts from experience, or in other words to formulate hypotheses. Their conception was that certain truths about the real world are self-evident, rather as Euclid’s geometric postulates were held for a very long time to be self-evident. However alternative possible geometries were conceived in the nineteenth century, and Einstein then showed that Riemann’s geometry corresponds better with the larger universe than Euclid’s geometry. The economists’ attitude betrays a lack of appreciation of the creative accomplishments of Copernicus, Kepler, and their hero Newton, whose hypotheses about the nature of the solar system required enormous leaps of imagination. Kepler, especially, spent many years trying a wild variety of different hypotheses before he arrived at his famous “laws” of planetary motion.
The economists’ disdain for testing their conclusions against experience does a gross disservice to Tycho Brahe and others who spent lifetimes accumulating careful observations, and to Kepler, Newton and others who performed vast, tedious and technically difficult calculations in order to compare their predictions with observations.
Walras and Jevons clearly did not properly appreciate the difference between mathematics and science. Neither, apparently, do many of their heirs, whose disdain for testing their deductions against experience was noted in Chapter 3, along with Margaret Thatcher’s presumption that monetarism has the same scientific status as the law of gravity. Toohey[ii] quotes the well-known economist Wassily Leontief as complaining that over 50% of the articles in the American Economic Review during the 1970s comprised mathematical models containing no data.
The Friedman Twist
The confusion over how hypotheses are abstracted from experience and how assumptions should be evaluated has continued to the present. According to Steve Keen[iii], a lot of this confusion is attributable to Milton Friedman, who argued in a well-known paper in 1953 that a theory should not be judged by its assumptions, but only by its predictions. Actually Friedman went further, and argued that
Truly important and significant hypotheses will be found to have “assumptions” that are wildly inaccurate descriptive representations of reality, and, in general, the more significant the theory, the more unrealistic the assumptions (in this sense). The reason is simple. A hypothesis is important if it “explains” much by little, that is, if it abstracts the common and crucial elements from the mass of complex and detailed circumstances surrounding the phenomena to be explained and permits valid predictions on the basis of them alone. To be important, therefore, a hypothesis must be descriptively false in its assumptions; it takes account of, and accounts for, none of the many other attendant circumstances, since its very success shows them to be irrelevant for the phenomena to be explained.[iv]
Paul Samuelson called this proposition “the F-twist”. If this paper set the standard for the economics discipline, then no wonder it is in utter confusion. There’s an important point buried in this statement, but the statement is so confused and self-contradictory that it’s easy to draw completely opposing conclusions from it. Let’s untangle the mess. You don’t need a degree in philosophy.
What Friedman was groping for is what scientists call a good first approximation. Yes, a lot of the world is messy, and you don’t try to leap in and account for every last detail and nuance of it in your first attempt at building a mathematical description of it. The art of doing this kind of science, part of the irrational first stage of the process that I described earlier, is to simplify, but to simplify in a way that captures a lot of the essence of the behaviour you have observed.
Our simple theory of the sun’s apparent motion is a useful first approximation to the sun’s daily motion across the sky, but it fails to describe changes that occur from season to season. To explain the progress of the seasons you need a more elaborate theory. For example, you might suppose that the sun follows a great circle on a celestial sphere, to which the stars are fixed. If the sun’s circle is inclined to its daily path in the correct way, and if the sun completes one circuit in one year, then you can explain the progress of the seasons. You now have a second approximation that is more accurate and therefore more useful than the first. The first approximation still explains the most obvious manifestation of the sun’s motion, namely the succession of darkness and light, so it is quite a useful approximation.
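The gain from the second approximation can be shown with a small calculation of my own devising, using the standard sunrise-equation approximation (it ignores refraction and the sun's finite size). If the sun's annual circle is inclined at about 23.4 degrees to its daily path, its declination varies through the year, and with it the length of the day at a given latitude — something the first approximation, which gives twelve hours of light everywhere on every day, cannot capture at all.

```python
import math

TILT = 23.44  # degrees: inclination of the sun's annual circle to its daily path

def declination(day_of_year):
    """Approximate solar declination in degrees (zero at the March equinox, ~day 81)."""
    return TILT * math.sin(2 * math.pi * (day_of_year - 81) / 365)

def day_length_hours(latitude_deg, day_of_year):
    """Hours of daylight from the standard sunrise-equation approximation."""
    lat = math.radians(latitude_deg)
    dec = math.radians(declination(day_of_year))
    cos_h = -math.tan(lat) * math.tan(dec)
    cos_h = max(-1.0, min(1.0, cos_h))  # clamp for polar day and polar night
    return 2 * math.degrees(math.acos(cos_h)) / 15  # 15 degrees of rotation per hour

# First approximation: 12 hours of light everywhere, every day.
# Second approximation, at 45 degrees north:
print(day_length_hours(45, 172))  # near the June solstice: long summer days
print(day_length_hours(45, 355))  # near the December solstice: short winter days
print(day_length_hours(45, 81))   # March equinox: about 12 hours
```

The second approximation thus predicts long days in summer and short days in winter, and reduces to the first approximation at the equinoxes, which is just what a better approximation should do: it extends the old theory's reach without discarding what the old theory already explained.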
You could add more refinements to our theory. For example, you could make a third approximation to explain the motion of the planets by defining a great circle path around the celestial sphere for each of them. However you would then find that the apparent motion of each planet is more complicated than the sun’s annual circuit. A fourth approximation would then be to add smaller “epicycle” paths to the main path of each planet. This was the kind of elaborate theory that Ptolemy set down in detail, and it was quite successful for most purposes of the ancients. Many centuries later, Newton, building on the work of Copernicus and Kepler, made a quite different first approximation: that the planets, including the Earth, move around the sun under the action of an attractive force that varies inversely as the square of the distance from the sun.
These examples illustrate the art of doing scientific approximations. Our simple theory of the sun’s motion is indeed a useful first approximation, both because it accounts for a primary observation to a useful level of accuracy and because it’s a good starting point for a more elaborate and more accurate theory. On the other hand, Newton’s first approximation is much more useful because it accounts for the apparent motions of the sun and planets to considerable accuracy. There is no absolute criterion for judging a good first approximation; you just have to try it and see.
If Friedman had said that the best theories are those that capture the most observed behaviour with the fewest assumptions, even though those assumptions are still clearly a simplification of reality, that would have been a very useful lesson to draw. However much of the economics profession seems to have drawn the conclusion that assumptions don’t matter at all, so long as your logical deductions from the assumptions are rigorous. To go further and to suggest, as Friedman did, that the more unrealistic the assumptions the better the theory is quite misguided. To say that the assumptions underlying a really good theory are necessarily “wildly inaccurate” is just nonsense. Newton’s first approximation is not wildly inaccurate, and Newton’s theory is much more useful than Ptolemy’s, even though Ptolemy’s was refined well beyond a first approximation. Newton’s theory is better precisely because it “explains much by little”.
There is a more subtle trap here, and much of the economics profession seems to have fallen into it. Inappropriate assumptions can exclude whole classes of important behaviour. For example, the nearly-universal assumption in economics that the economy is near an optimal general equilibrium necessarily implies that if we make any significant changes to how we do things the economy will be less efficient and prices will rise. Such logic has been used to argue that to reduce emissions of gases that cause greenhouse warming will be horrendously expensive. Yet cleverer designs for houses, office buildings, factories, and so on, have been developed that reduce energy use for little or no extra cost[v]. This means of course that reducing greenhouse gas emissions need not be very expensive at all. Standard economic models exclude this possibility from the beginning, and so estimates based on such models are worthless. Yet such estimates have been seized upon by successive U.S. Administrations to justify their inaction on reducing fossil fuel use.
The lesson here is that the predictions of a theory are entirely conditioned by the assumptions upon which it is built. Wildly inaccurate assumptions will get you a wildly inaccurate theory. Inappropriate assumptions may exclude important kinds of behaviour.
The North American Free Trade Agreement was justified on the basis of computer models that were built on some or all of the following assumptions: general equilibrium, full employment, no capital transfers across national borders and (believe it or not) equal wages in Mexico and the U.S.[vi] These assumptions certainly qualify as wildly inaccurate. My impression is that the perpetrators of such acts of sophistry have forgotten what assumptions underlie their theorising, or if they remember then they have little conception of how the assumptions are conditioning the answers that come out of their models – I have seen quite a few examples of this in my own field. You don’t need a fancy computer model to figure out that if capital investment moves from the U.S. to Mexico, so will jobs, but this is not what the models predict – they can’t because that possibility is excluded from the models a priori. You will meet the NAFTA and energy-efficiency examples in more detail later in this book.
These absurdities would not matter so much if careful testing of predictions and assumptions were a more prominent part of the practice of mainstream economics. The nonsense would gradually be weeded out, as it is in real scientific disciplines. But in a discipline that wields so much power, yet traditionally disdains the testing of predictions and is positively encouraged, intentionally or otherwise, to ignore the critical importance of assumptions, these deficiencies are disastrous.
Comparison with physics
It is instructive to compare the subsequent fates of the theories of statistical mechanics and neoclassical economics with which we opened Chapter 3. Physicists have long since recognised that the simplest, hard-sphere model of atoms is useful only for gases, and even there it yields measurable discrepancies from the behaviour of real gases. It fails totally for liquids and solids, for which the complex interactions of quantum mechanics have to be included.
Physicists have also recognised that equilibrium theories are only useful for nearly isolated systems that are close to a state of equilibrium, or in other words for systems that are changing only slowly and with very little energy exchange with surroundings. By the time neoclassical economics was being formulated, physicists had moved on to formulating the second law of thermodynamics, which has important implications for systems that are not close to an equilibrium. The second law states a limit on how much energy can be extracted from a non-equilibrium system (although it doesn’t allow us to calculate just how much energy will be extracted in a particular situation). The second law prohibits perpetual motion machines.
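The limit the second law places on energy extraction can be stated compactly. The following is standard textbook thermodynamics, offered here only as a sketch of the ideas referred to above, not as material from the original sources:

```latex
% Clausius statement: the entropy of an isolated system never decreases.
\Delta S \ge 0

% For a system exchanging heat with a reservoir at temperature $T$, the work
% that can be extracted in a process is bounded by the drop in Helmholtz
% free energy $F = U - TS$:
W \le -\Delta F = -\left(\Delta U - T\,\Delta S\right)
```

A perpetual motion machine would require the second inequality to be violated, which is why the second law rules such machines out while still declining, as noted above, to say how much work any particular process will actually deliver.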
Within the past few decades, physicists have made major advances in understanding systems that have large energy fluxes through them and are therefore far from equilibrium. The much greater capabilities provided by computers have been central to exploring the behaviours of such systems. Such explorations have revealed the surprising and exciting phenomena of self-organising systems, complexity and chaos.
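The kind of computer exploration described above can be illustrated with a deliberately tiny example. The logistic map does not appear in this book's argument; it is offered only as a hedged sketch of how iterating a very simple non-equilibrium rule on a computer reveals chaotic behaviour that equilibrium reasoning would never predict:

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n): one of the simplest systems
# in which computer iteration reveals chaos (illustrative only; not a model
# taken from the text).

def logistic_orbit(r, x0, n):
    """Return the first n+1 iterates of the logistic map starting from x0."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# In the chaotic regime (r = 3.9), two orbits starting a millionth apart
# soon diverge completely: sensitive dependence on initial conditions.
a = logistic_orbit(3.9, 0.2, 50)
b = logistic_orbit(3.9, 0.200001, 50)
print(max(abs(x - y) for x, y in zip(a, b)))
```

Nothing in the rule itself hints at this behaviour; it only becomes visible when the system is actually iterated, which is why cheap computation transformed the study of far-from-equilibrium systems.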
To summarise, physicists have recognised the limitations of the early theories of gases by comparing their predictions with the real world. The early versions are still useful, but only for particular situations and with limited accuracy. Where they have been able, physicists have developed more elaborate theories that apply in other important situations (such as to solids and liquids). A new realm of non-equilibrium theories, with potentially much broader application, has been opened up more recently with the aid of powerful computers.
In contrast, mainstream economic theories have remained at the equivalent of minor variations of the hard-sphere model of a gas. They remain firmly in the realm of equilibrium theories. They take no account of the second law of thermodynamics, and have a corresponding disregard for inputs of energy and materials and outputs of waste. Many neoclassical theorists are still in the mode of demonstrating mathematical theorems rather than using the power of computers to explore beyond the small realm of their restrictive assumptions. Still less do they seriously compare their theories with reality. The mainstream economics of public policy is oblivious to the radically different possibilities of non-equilibrium theories. In spite of such deficiencies, neoclassical theory has been dominant in Western nations throughout the twentieth century. Even at times when government policies did not directly reflect its prescriptions, it has usually been the implicit yardstick of virtue against which other policy programs have been measured.
Blatant discrepancies between predictions and observations have gone unaddressed. Challenges to the fundamentals of the theory, showing how assumptions that are clearly more realistic will change central predictions of the theory, have been brushed off and their proponents marginalised or ignored. The clear impression is that practitioners have become far too enamoured with mathematical sophistication (of a certain restricted kind), and far too attached to the putative general equilibrium. The term “pseudo-science” is not used here as a mere epithet. It is the most accurate term I can think of to describe the status of the neoclassical theory.
In 1987 a conference at the Santa Fe Institute brought together a select group of prominent physicists and economists to discuss the implications of new theories of complexity and to cross-fertilise their disciplines. The physicists were awe-struck by the economists’ mathematical prowess but startled by their lack of reference to the real world.
“They were almost too good,” says one young physicist, who remembers shaking his head in disbelief. “It seemed as though they were dazzling themselves with fancy mathematics, until they really couldn’t see the forest for the trees. So much time was being spent on trying to absorb the mathematics that I thought they often weren’t looking at what the models were for, and what they did, and whether the underlying assumptions were any good. In a lot of cases, what was required was just some common sense. Maybe if they all had lower IQs, they’d have been making some better models.”[vii]
The economists were startled in turn by the physicists’ casual attitude towards mathematics. If a rough back-of-the-envelope calculation would enable the physicists to compare their theory with observations, they might not worry about doing a fancier calculation. The goal in physics is not a theoretical structure that is as elaborate as possible; the goal is a theory that represents how the real world works to a useful level of approximation. Waldrop quotes the unconventional economist Brian Arthur:
“They kept pushing us and pushing us,” says Arthur. “The physicists were shocked at the assumptions the economists were making – that the test was not a match against reality, but whether the assumptions were the common currency of the field. I can just see [Nobel physicist] Phil Anderson, laid back with a smile on his face, saying, ‘You guys really believe that?’”[viii]
Survey of the wreckage
As Sir John Hicks feared, the wreckage wrought by increasing returns, volatile preferences, social interactions, delayed and incomplete information, the unpredictable future, and so on, includes the greater part of the general equilibrium theory. Among the strewn pieces we can discern the claim that free markets promote efficiency, and with that claim goes the conventional argument for free trade. Lying over there is the quaint claim that the financial markets are rational and “efficient”. In places we have not explored you will find the basis for managing the “labour market”, the equivalence of capital (meaning money) and capital (meaning factories, the means of production), and the claim that returns on invested money are determined by a notional decreasing “marginal revenue product of capital”.
You will even find, lying in pieces on the ground, what Steve Keen calls the sacred totem of neoclassical economics: the “law” of supply and demand (Figure 1.4a). Floating above the wreckage you might notice a fuzzy, pulsating cloud of preference, while a shifting, undulating supply line snakes towards the ground, instead of rising steadfastly and majestically up into the sky (Figure 1.4b).
As protests against corporate globalisation have gained in intensity over the past few years, so have signs of frustration and concern started to appear among the defenders of the world order. One of those signs is the increasingly frequent accusation that the protesters have no alternative to offer. This is not true, and most of the protests, from Seattle on, have been preceded by teach-ins and similar gatherings that provide a great deal of information on what is wrong and how to fix it. However this information is almost totally ignored by the mainstream media and the establishment. That’s a major reason why the problems developed in the first place, of course.
There’s also an irony in this complaint. It’s as if you say to someone “Stop beating your head against that brick wall”, and he shouts back angrily “But you haven’t told me what else I should do instead”.
Nevertheless the point can be well taken. This book will offer quite a few suggestions about how we can do things better, but the attention spans of the media and politicians apparently require that new policy advice should be given now, and should take no longer than a seven-second sound bite. In this spirit, I offer some interim policy advice, in the form of a simple three-step process.
Interim Policy Advice.
1. Stop listening to conventional economists.
2. Use your common sense.
3. This is not a joke.
- [i] [Pirsig, 1975]
- [ii] [Toohey, 1994] 17
- [iii] [Keen, 2001] Chapter 7
- [iv] [Friedman, 1953/1984]
- [v] [von Weizsäcker, et al., 1997]
- [vi] [Stanford, 1993]
- [vii] [Waldrop, 1992] 140
- [viii] [Waldrop, 1992] 142