Economic activity can be represented in quantitative terms, whether as income, wealth, environmental impact, or the material and monetary resources required to produce, transport and sell goods. It is therefore logical that part of economics (as a discipline), which seeks to represent the economy (as an activity), can, and indeed must, draw on numerical data, and that it attempts to identify mathematical relationships between these data in order to explain, or even anticipate, their evolution.
Since the end of the 19th century, economists have made increasing use of mathematics1 and mathematical models (see box below). As a result, economics is sometimes considered more “scientific” than the other humanities and social sciences2. In this fact sheet, we present three areas in which mathematics is used in economics. We will see that this use can lead to abuses, for example by taking for granted a result that depends on numerous hypotheses that have not been verified in practice, or by generalizing another result well beyond its field of validity. We will also look at a few cases where this recourse is useful and legitimate, as well as the precautions to be taken.
N.B. We do not address here the more fundamental question of the place of quantification in our societies3. Nor do we discuss a specific use of financial mathematics, the invention of financial products, a subject whose importance should not be underestimated since, in this case, mathematics participates in the construction of reality and its interpretation by having a performative effect4. Finally, we do not cover the use companies make of mathematics in their accounting and financial decisions.
What is a mathematical model?
A mathematical model consists of a set of mathematical equations and a method for solving them, the purpose of which is to predict the value of one or more variables of interest5 as a function of parameters6 set by the user and of one or more input variables.
For example, the weather can be predicted by a mathematical model, whose equations are physical laws, whose parameters are a few well-chosen characteristics of the earth’s atmosphere, and whose input variables are the initial conditions (temperature, humidity, winds, etc.) characterizing the observed weather. The model’s output variables are temperature, humidity, cloud cover, etc. at a later date.
High school physics courses teach very simple models, such as the one used to calculate the landing point of a ball of a given mass launched from a given starting point at a given speed. In economics, the teaching of “microeconomics” is based on numerous optimization models7.
Helping to interpret the business world
As reality is not directly accessible to us, we use instruments and/or tools to try to give it form. These can be either material, such as the measuring instruments used in physics or biology, or conceptual, i.e. based on theory.
The economic world has manipulated numbers (particularly in private and public accounting and in trade) for centuries, or even millennia, since the first uses of numbers were partly linked to trade and land ownership8. Mathematics can provide measures of phenomena that are not immediately accessible to the senses through raw data, such as the extent and distribution of social inequalities (with indicators like the Gini coefficient or the Palma ratio, which we explain in our fact sheet on measuring inequality).
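As an illustration of such an indicator, the Gini coefficient can be sketched in a few lines of Python. This is a toy version applied to raw income lists; official statistics use weighted, equivalised household incomes.

```python
# Toy Gini coefficient: 0 = perfect equality, 1 = one person holds everything.
# Uses the mean-absolute-difference formulation:
#   G = sum_i (2*i - n - 1) * x_i / (n * total), with incomes sorted ascending.
def gini(incomes):
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    return sum((2 * i - n - 1) * x for i, x in enumerate(xs, start=1)) / (n * total)

print(gini([1, 1, 1, 1]))             # perfect equality -> 0.0
print(round(gini([0, 0, 0, 10]), 2))  # extreme concentration -> 0.75
```

The value of such a measure is exactly what the text describes: it makes visible, in one number, a distributional fact that no single raw datum shows.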
The concepts used in economics contribute to the interpretation and, in a way, the shaping of the economic world. Unemployment or underemployment, inflation or purchasing power are “constructed” notions, concepts that “make visible” phenomena that cannot be seen directly. They can also be quantified, and this quantification leads to a certain view of the state of society. It is, for example, not irrelevant to our perception of the world whether the unemployment rate is very low or, on the contrary, high (which, once again, can be assessed using figures).
The educational value of toy models
In economics, pedagogy, which aims to provide keys to interpretation, can be facilitated by the use of mathematics and, in particular, by the use of “toy models”, a common practice in physics.
Let’s look at a few examples.
Consumer sensitivity to the price of goods purchased, i.e. the fact that consumption of a good will vary more or less according to its price, can be analyzed using the notion of elasticity, expressed by a mathematical equation. By using this notion and attempting to “calibrate” the equation (i.e., to determine the value of the various parameters as a function of observed purchasing behavior), we can draw conclusions about the effectiveness of an energy tax or the consequences of a price rise.
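As a minimal sketch of the notion, a constant-elasticity demand curve can be simulated as follows. The elasticity value of −0.4 is an illustrative assumption, not a calibrated estimate.

```python
# Hypothetical constant-elasticity demand: Q = Q0 * (P / P0) ** e
# With e = -0.4, a 10% price rise cuts consumption by roughly 4%.
def demand(q0, p0, p, e):
    return q0 * (p / p0) ** e

q0, p0, e = 100.0, 1.0, -0.4
for p in (1.00, 1.10, 1.50):
    print(f"price {p:.2f} -> consumption {demand(q0, p0, p, e):.1f}")
```

Calibrating the model then means estimating e from observed purchasing behavior, after which the same equation can be used to gauge, say, the likely effect of an energy tax.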
The rather counter-intuitive idea that an energy-saving policy can lead to increases in energy consumption can be understood thanks to the concept of the rebound effect, introduced by William Stanley Jevons in the 19th century. This effect can be defined mathematically, enabling empirical tests of its reality and magnitude.
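One possible minimal formalization of the direct rebound effect assumes a constant-elasticity demand for the energy service: an efficiency gain makes the service cheaper, demand for the service rises, and energy use may fall less than expected, or even rise. All numbers here are illustrative assumptions.

```python
# Direct rebound effect, minimal sketch.
# A gain in efficiency cuts the cost of the energy *service*; demand for the
# service responds with elasticity e; energy use = service demand / efficiency.
def energy_use(efficiency_gain, e):
    service_price = 1.0 / efficiency_gain  # the service gets cheaper
    service_demand = service_price ** e    # constant-elasticity response
    return service_demand / efficiency_gain

print(round(energy_use(1.5, -0.3), 3))  # |e| < 1: energy use still falls
print(round(energy_use(1.5, -1.2), 3))  # |e| > 1: "backfire", energy use rises
```

The empirical question is then where the elasticity actually lies, which is what tests of the effect's "reality and magnitude" try to establish.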
The notion of economic growth is based on fairly simple mathematics whose properties are nonetheless generally poorly understood: those of exponential curves, which are very different from linear phenomena. It is important – but not immediately obvious – to understand, for example, that an annual growth rate of 3% (apparently low, and roughly the order of magnitude of world GDP growth over the last fifty years) leads to a 19-fold multiplication in one hundred years, and to the accumulation over this period of around 620 times the initial annual quantity.
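These orders of magnitude are easy to verify numerically (the cumulative figure depends slightly on the summation convention chosen, hence “around 620”):

```python
# Verify the orders of magnitude quoted for 3% annual growth over a century.
rate, years = 0.03, 100
growth_factor = (1 + rate) ** years  # size of the annual flow after 100 years
# Total produced over the century (summing the annual flows):
cumulative = sum((1 + rate) ** t for t in range(1, years + 1))

print(f"multiplication after {years} years: x{growth_factor:.1f}")
print(f"cumulative total: about {cumulative:.0f} times the initial annual quantity")
```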
Applied to the recycling problem, and using a fairly simple mathematical model9, we understand that recycling cannot “counteract” the fact that exponential consumption depletes a finite resource. As François Grosse puts it: “If material consumption grows by more than 3% a year – as has been the case for iron and copper over the last century – recycling even 90% of our waste has only a derisory effect on resource conservation.”10 But the model does enable us to determine the conditions required for recycling to be effective (i.e., according to the author, to delay depletion of the natural resource concerned by at least 100 years). These depend on three parameters: the growth rate of resource consumption (which must be less than 1%), the recycling efficiency, and the residence time of the material in the economy. This example clearly illustrates the usefulness of this type of model: it enables us to grasp a non-intuitive phenomenon.
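The mechanism can be illustrated in a few lines of Python. This is a deliberately crude sketch inspired by the argument, not a reproduction of François Grosse's model: consumption grows at rate g, and a fraction r of the material consumed tau years earlier comes back as recycled input.

```python
# Why recycling cannot offset exponential growth: with g = 3% and r = 90%,
# the recycled share of consumption settles at a constant level, so primary
# extraction keeps growing at 3% a year regardless.
def primary_extraction(g, r, tau, years):
    series = []
    for t in range(years):
        consumption = (1 + g) ** t
        recycled = r * (1 + g) ** (t - tau) if t >= tau else 0.0
        series.append(consumption - recycled)
    return series

series = primary_extraction(g=0.03, r=0.9, tau=10, years=60)
recycled_share = 1 - series[-1] / 1.03 ** 59
print(f"recycled share of consumption: {recycled_share:.0%}")
print(f"primary extraction multiplied by {series[-1] / series[20]:.1f} between years 20 and 59")
```

Rerunning the sketch with g below 1% pushes the recycled share much closer to r, which is the regime in which, according to the model, recycling genuinely delays depletion.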
Building and validating theories
To interpret reality, economists develop theories. These are necessarily based on a simplification of reality and the use of abstract concepts (such as consumer, producer, labor market, etc.). Establishing rules, correlations and even causalities between complex phenomena requires starting from simplified (or “stylized”) facts and hypotheses.
From this work of abstraction comes the possibility of mathematical formalization, which most often takes the form of modeling, i.e. a simplified representation of economic reality, or part of it, via a system of equations.
After recalling the importance of the criterion of refutability for a theory to be valid, we’ll see how recourse to mathematical formalization can be useful, and what its dangers are.
The discipline of economics is full of “theories” that don’t pass the test of refutability.
In economics, many “theories” take the form of arguments from authority. Summed up in a self-evident assertion, they are far too imprecise and general to be refutable. The use of mathematical formalism and quantitative data can, moreover, give them an appearance of rigor that is misleading for non-specialists.
The importance of a theory’s refutability
Any theory should ideally be refutable11, that is, formulated with sufficient precision to make it possible to:
– check that there are no internal logical flaws in the theory itself;
– check that the hypotheses on which it is based are not too simplistic or too far removed from reality to answer the question under study;
– compare the theory and its conclusions with real data.
What is the nature of the relationship – correlation, causality? – between the variables concerned? A simple mathematical model can be constructed and tested.
Here are a few examples12 of assertions that are taken for granted and therefore formulated far too imprecisely to be refutable.
“Money creation is inflationary”
This assertion is regarded as an economic truth, so much so that it has been at the heart of Western central banking for the past four decades. Yet it has no real substance as it stands. To test it quantitatively, its terms must first be specified: public money creation, central bank money creation or commercial bank money creation? Measured by which indicator, in which currency zone, and over which period?
Such quantification makes it possible to refute this assertion and, in this case, to show that it is inaccurate in its generality.
General equilibrium theory
Economists Gérard Debreu and Kenneth Arrow are credited with the theoretical demonstrations of the following three “theorems”:
- The existence of a price system that leads to “general equilibrium” (i.e. the price of goods is such that supply equals demand, and this for all markets) in an economy with pure and perfect competition;
- The fact that this equilibrium is an “optimum” in Pareto’s sense, a situation in which it is impossible to improve the lot of one economic agent without reducing the satisfaction of another;
- The existence of an “initial” allocation of goods that leads to an optimum deemed desirable (from the point of view of efficiency, for example)13.
The exegesis of these works is abundant14, and we do not intend to summarize it here, but to emphasize a few points:
- The terms used in the demonstration (pure and perfect competition, Pareto optimum, market equilibrium, etc.) have a precise meaning in mathematical modeling that does not always correspond to the everyday meaning;
- The mathematical demonstration is indisputable, but it is based on many assumptions15, the vast majority of which are not verified in practice;
- It would therefore be wrong to claim that these demonstrations prove the superiority of pure and perfect competition. In fact, this is quite contrary to the facts: situations of oligopoly are very common, and mainly for reasons of efficiency (linked to the fact that returns are rarely decreasing, as required by the demonstration);
- It is also highly debatable, although often asserted in economics courses, that this work proves that questions of equity are outside the scope of economics, and should therefore be resolved by “redistributive” political intervention16;
- Some economists, following in the footsteps of Léon Walras, one of the “fathers” of mathematical economics, make pure and perfect competition a “norm”, an ideal to strive for; they thus move, without always saying so, from economic analysis to normative thinking, which is not scientific but, of course, political (see paragraph 3.3). Let’s quote Walras: “Mr. Pareto believes that the aim of science is to get closer and closer to reality by successive approximations. And I believe that the ultimate goal of science is to bring reality closer to a certain ideal; that’s why I formulate this ideal.”17
The comparative advantage theorem
We owe David Ricardo an argument in favor of international trade, known as the comparative advantage theorem, as opposed to the “absolute advantage” theorem formulated by Adam Smith a few decades earlier.
In simple terms, according to this “theorem”, any country would gain from specializing in the production of goods for which its comparative advantage is the highest, i.e., whose relative costs are the lowest, and from buying abroad the goods it does not produce. This is an argument in favor of free trade: all countries stand to gain from free trade by specializing.
This theorem is usually “demonstrated” by representing the economy in an ultra-simplified way, using a basic mathematical model, as Ricardo did (two countries each producing the same two goods with only labor costs, no impact on natural resources, no taxation, no transport, no money, no credit, etc.).
This “model” is clearly unable to represent the real economy in all its complexity, as the video below explains.
No general conclusion can therefore be deduced from this theorem, such as the superiority of free trade over protectionism in the real world. Subsequent theoretical literature has, of course, sought to add complexity to this very frustrating initial model, without thereby making it possible to reach such a conclusion18. The reality of the situation cannot be encapsulated in a mathematical representation.
Financial markets are efficient
The “demonstration” of the efficiency of financial markets earned economist Eugene Fama the “Nobel Prize in Economics” in 2013, which he shared with another economist, Robert Shiller, who… rightly challenges this idea, claiming on the contrary that markets are exuberant and irrational19.
Here too, the idea has been much debated20. Mathematician Nicolas Bouleau has produced a remarkable critical analysis of it, focusing on semantics and the use of modeling.
What we see here is an attempt to link the economic notion of efficiency, which can only mean the “right” allocation of resources (capital, investment) so as to avoid waste, with mathematical formalizations of random processes (martingales or semi-martingales for discounted prices, Markov processes, filtering).
Nicolas Bouleau's point is that the question of market efficiency is simple to state: are credit and savings put to good use through the simple interplay of markets? The extremely complex mathematical tools employed, impressive though they are mathematically, do not provide an answer, for there are far too many steps to go through before the formalization can be linked back to reality.
The value of mathematics in building and validating theories
Using a mathematical formulation has several advantages:
This means avoiding ambiguities
Common language is ambiguous, and the first quality of mathematics is to force us to define the terms we use. Unemployment is not under-activity, central bank money is not scriptural money, effective working time is not legal working time, final energy is not primary energy, productive capital is not corporate accounting capital, and so on. These clarifications, expressed here in literary terms, are essential when we move on to quantification.
However, it is important to maintain this lexical rigor in the transition from mathematical models to the real world. To quote mathematician Ivar Ekeland22: “I always remember what Hector Sussmann said about catastrophe theory, a long time ago: ‘In mathematics, names are free. One is allowed to call a self-adjoint operator an elephant, and a spectral resolution a trunk, in return for which one can demonstrate that every elephant has a trunk. What one is not allowed to do is pretend that this has anything to do with big gray animals.’”
This enables rigorously logical reasoning.
The most remarkable feature of mathematical reasoning is that it can be checked in an “impersonal” and reliable way: a well-built and verified mathematical model produces conclusions that are as solidly established as humanly possible, although they of course remain dependent on the assumptions made. Ultimately, the solidity of a conclusion reached by mathematical reasoning is exactly equal to the solidity of the assumptions from which that reasoning starts. This property of mathematics should not be underestimated in economics: thanks to it, we can flush out the logical errors that abound in everyday language and reasoning, all the more numerous as economics is the object of passions, interests and dogmas.
It also makes it possible to establish the conditions of validity of a statement. Mathematical modeling always presupposes hypotheses, and intellectual rigor requires making them explicit. It is then possible to check their degree of realism. In the case of general equilibrium theory, for example, one of the hypotheses is, as we have seen, that of diminishing returns; once formulated, it is possible to check whether it holds or not (and, as it happens, it almost never does).
Unfortunately, however, this rigor is not always present, as we shall see. Economic discourse is full of very general assertions derived from demonstrations whose field of validity is very narrow, if not non-existent (as in the case of general equilibrium theory: we know of no complete-market situations23 in pure and perfect competition).
Mathematical formulation enables statistical testing of the ability of statements (or theory) to account for empirical data.
Econometrics is the branch of economics that attempts to verify whether abstract statements reflect empirical data. If you assert that variable A is a function of variable B (mathematically, A = f(B), f being a mathematical function that can take many forms), econometric tests allow you to verify this assertion, by analyzing the values taken by variables A and B over the last N years.
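In its simplest form (simple linear regression), such a test can be sketched as follows, on made-up data:

```python
# Minimal OLS fit of A = a + b*B, the simplest form an econometric test of
# "A is a function of B" can take. The data below are invented for illustration.
def ols(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope  # intercept, slope

B = [1.0, 2.0, 3.0, 4.0, 5.0]
A = [2.1, 3.9, 6.2, 7.8, 10.1]  # roughly A = 2*B plus noise
a, b = ols(B, A)
print(f"A ≈ {a:.2f} + {b:.2f} * B")
```

Real econometric work adds standard errors, significance tests and diagnostics; the point here is only the shape of the exercise: confronting the posited function f with the observed values of A and B.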
In this way, for example, it is possible to test the hypothesis that growth in GDP requires growth in primary energy consumption. This assertion can be tested for given periods, given territories, and so on. Econometric tests have the advantage of eliminating the theoretical statements they invalidate (for example, the statement that money creation is inflationary) and of relying on very few hypotheses (they simply confront test hypotheses with reality). However, in most cases they do not allow us to establish causality, only correlations.
Correlation is not causation
Let’s take two examples from the Finance for All website.
Ice cream sales and sunburn both increase during the summer. They are correlated. However, it would be absurd to conclude that sunburn is caused by eating ice cream, or that getting sunburned is a reason to buy ice cream. The simultaneous increase in these two variables can actually be explained by a third one: good weather.
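The situation is easy to reproduce in simulation: generate a “sunshine” variable, make both ice-cream sales and sunburn depend on it (plus noise), and measure their correlation. All numbers are, of course, invented.

```python
import random

# Simulated confounder: "sunshine" drives both ice-cream sales and sunburn.
# Neither variable causes the other, yet their correlation comes out very high.
random.seed(0)
sunshine = [random.uniform(0, 10) for _ in range(1000)]
ice_cream = [3 * s + random.gauss(0, 1) for s in sunshine]
sunburn = [2 * s + random.gauss(0, 1) for s in sunshine]

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(f"corr(ice cream, sunburn) = {corr(ice_cream, sunburn):.2f}")  # high, yet no causal link
```

The correlation is strong because both series inherit the variation of the hidden third variable, exactly as in the ice-cream example.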
Here is an economic example: is public debt bad for economic growth? In their 2010 article “Growth in a time of debt”, Carmen Reinhart and Kenneth Rogoff, two economists renowned for their work on macroeconomics and international economics, argued that economic growth was slower in countries with a public debt-to-GDP ratio of over 90%. Should we conclude that high debt levels cause slower growth, and that policies should therefore be implemented to bring this ratio below 90%?
Subsequent studies have contradicted the main result of Reinhart and Rogoff's article (which was, moreover, marred by major errors revealed by a young PhD student by the name of Thomas Herndon24, which triggered a media storm). Many of them also refuted, using complex statistical models, the existence of a causal link between high public debt and low economic growth. An article by John Irons and Josh Bivens25 even found an inverse causal effect: slow economic growth leads to higher public debt.
To get around this difficulty and demonstrate causality, some economists resort to “natural or randomized experiments”26.
In these experiments, we compare the evolution of a parameter P1 (the unemployment rate, for example) between two similar groups of individuals that differ only in another parameter P2 (the level of unemployment benefit, for example), which is changed for group A only. Parameter P1 evolves in both groups A and B; if it evolves differently in A than in B, it is “because” of the parameter P2 that was varied (a causality, therefore, not just a correlation). In this particular case, such experiments show that the effect exists but is very weak.
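The logic of such comparisons is often summarized by the “difference-in-differences” estimator, which can be sketched in one line. The numbers below are purely illustrative, not results from an actual study.

```python
# Difference-in-differences sketch: group A receives the change in parameter P2,
# group B does not; subtracting B's evolution from A's isolates the effect on P1.
def diff_in_diff(a_before, a_after, b_before, b_after):
    return (a_after - a_before) - (b_after - b_before)

# Unemployment rate (P1) in %, before/after a benefits change (P2) in group A only.
effect = diff_in_diff(a_before=8.0, a_after=7.4, b_before=8.1, b_after=7.6)
print(f"estimated causal effect: {effect:+.1f} points")
```

Group B absorbs whatever affected both groups (the business cycle, say), so what remains is attributable to P2, provided the two groups really were comparable.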
Econometrics uses a wide range of methods27: linear regression, time-series analysis, instrumental-variable models, regression discontinuity, natural experiments, and so on.
A large body of econometric work attempts to answer the question of causal attributions, particularly in the field of public policy evaluation. However, these methods are debatable and not always applicable.
Unlike the climate system, the economic system is not described by well-established physical sciences (thermodynamics, fluid mechanics, radiation physics, atmospheric physics, particle physics, chemistry, etc.). The attribution of causes in econometrics necessarily rests on debatable hypotheses and models (see section 3.1), and comes up against limitations inherent in the available data. If the authors of the IPCC Working Group III report on policies to mitigate climate change (2022) dare to assert that public policies have played a role in reducing the growth rate of greenhouse gas emissions in recent years, it is because they have taken into account multiple studies with different approaches that nevertheless converge on this conclusion (an important one, because it is genuinely useful to be able to say whether these policies are effective or not). Even so, this conclusion remains more fragile than those of Working Group I, which deals with climate physics.
Causal attribution in climate science
The science of climate “attribution” has made enormous progress. It is now possible to attribute the occurrence of extreme events (hurricanes, heatwaves, torrential rains, etc.) to climate change. A global organization, World Weather Attribution, applies probabilistic methods introduced by the pioneering work of Peter Stott (of the Hadley Centre) in the wake of the 2003 European heatwave. The extraordinary complexity of the climate system does not prevent us from attributing the occurrence of various events to ongoing warming.
The dangers of mathematics in economics
Recourse to mathematics has the advantages we have just seen, but is not without its dangers. Let’s take a look at some of the more salient ones.
Abuse of authority
The use of mathematics in economics leads to abuses linked to the authority conferred – unconsciously or not – on mathematical language, which is both hard to access and remarkably powerful. Mathematics can also be used to hide absurd or grossly simplistic hypotheses beneath abstract formulations. We will not say “we assume that all consumers are identical and always buy the same thing, regardless of their income”, but “consumer preferences are homogeneous of degree 1”. Steve Keen's book L'imposture économique (Les Éditions de l'Atelier, 2017) is full of examples of this type, which fall squarely within the realm of imposture.
The mathematization of a discipline adorns it ipso facto with the garments of science and qualities of solidity, which can however be largely usurped.
Looking “scientific” is not the same as being scientific.
Of course, all academic disciplines are difficult to access, and we can’t reproach them for being so for “laymen”. But when economists promote public policy proposals with societal consequences, citizens can’t be satisfied with arguments of authority, even when supported by mathematical apparatus. As we saw in the previous examples, mathematical demonstrations of general equilibrium or market efficiency do not allow us to draw conclusions for the real economy.
Confusing the use of mathematics with scientificity
The use of mathematics can lead people to believe that economics is a science, like physics, and that its statements, provided they are peer-reviewed in a scientific journal, have the same solidity as the most rigorous physical statements. Without claiming to account for the lively debate28 within the discipline of economics, we can make a few observations.
The publication of an article in an academic journal normally implies that it has been read and evaluated by peers. This clearly limits the risk of errors in calculation and reasoning; it also makes it possible to reproduce equivalent work on other data sets (over other periods or territories). When faced with an economic statement, it is therefore always advisable to check that it has been published in a “serious” journal (one whose publications are peer-reviewed according to the accepted rules).
But this cannot lead to definitive conclusions about the “truth” of the work's findings. Peer review is there to check that the mathematical reasoning is correct and that the initial set of hypotheses is “standard”. It never allows one to say that the hypotheses are good or bad, only that they are the hypotheses usual in that journal. Yet the validity of the result depends on the veracity of the hypotheses; from this the limits of these publications follow immediately…
This is unfortunately true even when the work is based on controlled empirical data, and even when the journal is among the most recognized in the world. On the one hand, economics will always remain a human discipline, where values and “biases” are inevitable. On the other hand, academic journals are generally controlled by “schools of thought” that impose canons of publication, and where more attention is paid to conformity and orthodoxy than to validity29. Let us quote economist Steve Keen's account of Nordhaus:
As any academic knows, once you are published in a field, you will be selected by journal editors as a reviewer for that field. So, instead of providing an independent check on the accuracy of research, peer review can be used to impose a hegemony. As one of the first of the very few neoclassical economists to work on climate change, and the first to provide empirical estimates of the damage caused to the economy by climate change, Nordhaus was able to frame the debate and play a gatekeeper role.
Finally, there is a bias that can only be spotted from inside the research community. Mathematical economists want to shine in the eyes of their peers. Very often, they deliberately position their model between two zones: complex enough that solving it could lead to a publication (or even a Nobel!), but not so complex that it becomes too complicated to solve (in which case a fellow mathematician would get the credit for the paper). As a result, these economists do not build a model that describes reality, but a “nice”, “beautiful”, “interesting” model that, in passing, perhaps describes reality a little (reality often being much simpler, or much more complicated, than the model suggests).
Conversely, there is a great deal of useful and profound thinking in economics that does not get published in the most reputable journals (often because it does not fit the “canon”). Consider the fact highlighted by economists Nicholas Stern and Andrew Oswald in 201931: of the 77,000 or so articles published by the 10 most influential journals in the discipline, only around sixty dealt with the climate. This simple fact means that work on these issues (and there is a wealth of it; see for example the articles published in the journal Ecological Economics) is not published in these journals.
This means that, unlike in the physical sciences where the hierarchy gives an indication of the likely quality of work (without, of course, being an absolute criterion), the hierarchy of journals in economics simply does not.
Confusion between positive and normative content
As mentioned in section 2.1, Léon Walras, one of the fathers of mathematical economics, believed that the aim of economic science was to “bring reality closer to a certain ideal”. Whereas the primary aim of the experimental sciences is to understand reality as it is (“positive” disciplines), he saw economics as a normative discipline.
Economic advisors who are listened to by politicians even have a “performative” effect.
Economists' results are performative: their prescriptions modify, through economic policies, the phenomena they observe, “whereas Newton's laws changed nothing about gravitation”32.
The implications of this vision of economics are profound.
Let’s take the most striking example: since, in theory – and subject to numerous assumptions – markets in pure and perfect competition (Walras’ ideal) lead to an optimal equilibrium, economic policy must ensure that reality aligns with the model’s assumptions, rather than the other way round. This involves, for example, limiting “rigidities” in the labor market (i.e., administrative regulations) to get closer to pure and perfect competition in this area; or “completing” financial markets by encouraging the creation of new products (usually derivatives).
Economics courses teach numerous “rules” to be applied as a basis for economic policies. This, of course, is not in itself objectionable. The economic world is complex, the decisions to be made are difficult, and the application of rules helps decision-makers. These rules are often based on a mathematical model that gives them extra authority. Insidiously, mathematization moves economics from a “positive” or “explanatory” mode to a “normative” mode, and the “rule” is imposed as truth.
Let’s look at a few examples. According to Hotelling’s rule, the price of a non-renewable resource increases exponentially at a rate equal to the interest rate, making the resource inexhaustible in the model (prices eventually reach levels at which consumption becomes impossible). One might conclude from this that non-renewable resources never run out. But Hotelling’s demonstration is based on a number of assumptions that are not verified in practice; Ivar Ekeland and his co-authors34 have shown that the conclusion no longer holds as soon as one of the assumptions is replaced by another, more realistic one.
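The rule itself is a one-liner: under its assumptions, the resource price simply compounds at the interest rate. The starting price and the 5% rate below are illustrative values, not estimates.

```python
# Hotelling's rule in its textbook form: under the model's (strong) assumptions,
# the price of a non-renewable resource grows at the interest rate r.
def hotelling_price(p0, r, t):
    return p0 * (1 + r) ** t

r = 0.05
prices = [hotelling_price(100, r, t) for t in (0, 10, 50)]
print([round(p) for p in prices])  # an exponential path that eventually prices demand out
```

As the text notes, this clean exponential path is only as solid as the assumptions behind it: change one of them realistically and the conclusion no longer holds.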
Ramsey’s rule provides a framework for determining the “public discount rate” to be used in socio-economic analyses (thus informing the public decision on the appropriateness of a public investment). This rule is based on a necessarily very crude model of the economy. Nevertheless, it is used as if it were self-evident, thanks to the reputation of the author and his model.
Other examples include the Taylor rule for the conduct of monetary policy by central banks, and Phelps’ golden rule, which sets the interest rate at the level of the population growth rate.
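The Taylor rule, at least, has a simple explicit form. In Taylor's original 1993 formulation (rates in %, with a 2% neutral real rate, a 2% inflation target, and response coefficients of 0.5 chosen by Taylor):

```python
# Taylor's 1993 rule: i = pi + r* + 0.5*(pi - pi*) + 0.5*output_gap
# with r* = 2% (neutral real rate) and pi* = 2% (inflation target).
def taylor_rate(inflation, output_gap, r_star=2.0, pi_star=2.0):
    return inflation + r_star + 0.5 * (inflation - pi_star) + 0.5 * output_gap

print(taylor_rate(inflation=2.0, output_gap=0.0))  # on target: neutral 4% rate
print(taylor_rate(inflation=4.0, output_gap=1.0))  # overheating: the rule says tighten
```

Treating such a benchmark as a norm to be obeyed is, again, a normative rather than a positive use of the model.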
Mathematical formalization can hide the fragility of empirical data
Mathematical models are abstractions. As we have seen, it is possible to check their validity by comparing them with real data.
However, it is important to note – because this is a very general problem – that economic “data” are “socially constructed”. As Gaël Giraud and I wrote in a note on economic and climate models35: “These data are therefore dependent on the performance of institutions, which can be very weak in some countries. Developing countries lacking human resources sometimes have a very inadequate and heterogeneous statistical apparatus. Countries with ‘strong’ political power may produce debatable statistical data, owing to a lack of transparency in their production and doubts about the influence of power on these data. Even in democratic and administratively well-organized countries, data pose serious methodological problems. Take, for example, inflation (in the sense of consumer price inflation). This indicator is based on statistical data that have been restated to take account of changes in product quality (from plain yoghurt to fruit yoghurt?), which raises formidable methodological questions. Taking into account the cost of housing (for owner-occupiers) also poses major methodological problems. There is also the question of defining and measuring capital, and its depreciation, which is essential in growth models, as well as that of unemployment. Mention should also be made of the whole ‘parallel’ economy, which escapes official statistics, even though studies allow estimates to be made and Eurostat even goes so far as to recommend that it be included in national accounts”36, which France refused to do.
It is therefore important to ensure, when drawing conclusions from mathematical work supported by numerical data, that the possible consequences of errors or uncertainties in the data are taken into account.
Decision support for public authorities
The purpose of economics (as a discipline) is to understand the economic world and to help public authorities make the most appropriate decisions in line with the objectives they set for themselves – or are given.
The economist John Maynard Keynes is credited with raising awareness of the need for government intervention in the economy. This was not self-evident before the 1929 crisis. The prevailing thinking was based on the idea that the role of public authorities was, above all, to give full scope and freedom of manoeuvre to private players coordinated by the markets. This is why the “demonstration” of General Equilibrium Theory was so important: it gave scientific lustre to the view that “laissez-faire” was the royal road to economic progress.
Once this conception came up against reality, the question posed to economists changed: what are the most appropriate ways for the state to intervene? The following are just a few examples of how mathematics can help public authorities in their various roles.
Economic and budget forecasting; foresight and macroeconomic models
The nature and intensity of government economic intervention depend first and foremost on its knowledge of economic activity. Public economists and statisticians have therefore developed tools for measuring this activity, such as GDP and, more generally, national accounting. These tools make it possible to characterize the state of health of the economy (according to criteria that are supposed to allow us to judge this health, a choice that is obviously debatable).
The main short-term forecasting models used
Economists are also developing mathematical models for short-term macroeconomic forecasting, such as the Mésange model and the Opale model (accompanied by a set of tools called Tresthor). The aim of these models is to forecast the level of economic activity, as well as to calculate public spending and tax revenues, based on the decisions taken. Some models are more specialized, such as the Inès model, used to measure the impact of tax reforms, the Destinie model, whose main applications concern pensions, or the Saphir model, which describes the impact of social benefits and taxes on household incomes.
Public authorities, embodied by various institutions (the European Commission, finance ministries, statistical institutes such as INSEE in France, central banks, public agencies such as ADEME in France, etc.), develop or have developed economic models in an attempt to anticipate the medium- and long-term future. In the field of energy and ecological transition, this work is used to improve public policies as a whole. The European Green Deal and the Fit for 55 package were built using models such as QUEST at the macroeconomic level and PRIMES for energy.
The Network of Central Banks and Supervisors for Greening the Financial System (NGFS), whose secretariat is provided by the Banque de France, uses a range of models, including the NiGEM macroeconomic model, to assess the resilience of bank balance sheets to the effects of climate change.
The French administration uses models such as ThreeME, developed by ADEME and OFCE, to assess the economic impact of the National Low-Carbon Strategy (SNBC), and the POLES model to build the Multiannual Energy Programme (PPE).
The IPCC, as part of Working Group III, which studies the public policies and private actions needed to mitigate climate change, synthesizes the economic simulations produced by the many models developed37 in economic laboratories around the world. The majority are so-called “integrated” models, or IAMs38, which attempt to represent the economy and its links with the climate. Their aim is to answer the question of the reciprocal impacts of climate and economy, with priority given to the link running from the economy to the climate.
Without going into an exhaustive critical analysis of all these different types of models35, we would like to highlight a few points.
- These models are not based on well-established physical laws. They all incorporate debatable theories and equations of economic agents’ behavior. They are also unable to take account of extra-economic events (such as the Covid pandemic, the war in Ukraine, or extreme climatic events like the historic drought in Spain in 2023). Finally, the financial sector can generate “endogenous” crises (i.e. crises arising from the behavior of its players and not from an external event), which are also extremely poorly captured by the most widely used models39.
- These models may leave out entire swathes of economic life that are nevertheless obviously relevant in reality. The vast majority of models aimed at making short-term macroeconomic forecasts take neither natural resources nor pollution into account. Nor do they represent credit or money. More generally, such models necessarily have “biases” which are not always made explicit, and their results are highly sensitive to these biases.
- Models that hope to represent economic reality (or, more precisely, the set of empirical data that is supposed to represent this reality) are generally designed to “fit” the data observed over the past few years. To achieve this, modellers “calibrate” the model’s equations35: in other words, they give the parameters the values that lead to the best possible fit between the model’s behavior and the empirical data. The fact that a model “fits” the empirical data of past years is no proof of its ability to do so in the future, which can be visualized with the following analogy. Let’s try to represent a tennis player’s series of victories and defeats in his last ten matches by coin tosses. If we take enough coins and flip each one 10 times in a row, we will end up with a coin that came up heads for every match the player won and tails for every match he lost. Yet this coin has no predictive power over the player’s next match.
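The coin analogy can be made concrete in a few lines of Python. “Taking enough coins” amounts to enumerating all 2^10 possible ten-flip histories: exactly one of them reproduces any given record of wins and losses, yet that coin’s eleventh flip remains a 50/50 draw.

```python
import itertools
import random

random.seed(1)
# A player's record over his last ten matches: True = win, False = loss.
record = tuple(random.choice([True, False]) for _ in range(10))

# "Taking enough coins" = enumerating every possible 10-flip history:
# among the 2**10 = 1024 of them, exactly one matches the record.
coins = list(itertools.product([True, False], repeat=10))
matching = [c for c in coins if c == record]
print(len(coins), len(matching))  # 1024 1

# The matching "coin" reproduces the past perfectly, yet its eleventh
# flip is still a 50/50 draw: perfect in-sample fit, zero predictive power.
```

This is exactly the overfitting risk run by a model whose parameters are tuned until it matches a short historical sample.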
Building and comparing regulatory instruments
Formalized economic analysis can also be used to implement regulations, and even to compare their relative effectiveness. Take pollution reduction policies, for example.
The economist Arthur Pigou came up with the idea of introducing an environmental tax, based on what is now known as the polluter-pays principle. Making the polluter bear the cost of pollution, via a tax based on the quantity of pollutants emitted, encourages him to reduce it, which is not the case if he does not bear this cost (as with a factory that discharges polluting effluents into the river downstream or into the air).
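Pigou’s mechanism can be sketched with a toy model entirely of our own construction (a quadratic abatement cost and arbitrary numbers, chosen only for illustration): the firm abates up to the point where its marginal abatement cost equals the tax rate, so with no tax it abates nothing.

```python
def optimal_abatement(tax, cost_slope):
    """With a quadratic abatement cost C(a) = k * a**2 / 2, the firm abates
    until its marginal abatement cost k * a equals the tax t: a* = t / k."""
    return tax / cost_slope

baseline_emissions = 100.0  # tonnes emitted with no abatement (illustrative)
k = 2.0                     # slope of the marginal abatement cost (illustrative)

# Without a tax there is no incentive to abate; with a tax of 50 per tonne,
# the firm finds it cheaper to cut 25 tonnes than to pay the tax on them.
no_tax = baseline_emissions - optimal_abatement(0.0, k)     # 100.0
with_tax = baseline_emissions - optimal_abatement(50.0, k)  # 75.0
print(no_tax, with_tax)
```

The sketch only illustrates the incentive logic; real abatement cost curves are neither known nor quadratic, which is part of the difficulty of setting the “right” tax rate.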
In the 1960s, the economist John H. Dales, following the theoretical work of “Nobel Prize” winner Ronald Coase40, proposed using the market41. He advocated state allocation of “rights to pollute”, and set out a “theorem”42 according to which this mechanism leads to an optimal allocation, whatever the initial allocation of these rights. This approach gave rise to quota markets (for SO2 in the USA, then for CO2 in Europe43).
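The property claimed by this “theorem” can be illustrated (not demonstrated) on a deliberately minimal two-firm model with quadratic abatement costs, entirely of our own construction: whatever the initial split of permits, trading drives both firms to the same marginal abatement cost, hence the same abatement efforts; only the direction of the trades, and thus the distribution of costs, changes.

```python
def market_equilibrium(k1, k2, cap, e1_0, e2_0, permits1):
    """Two firms with marginal abatement costs k1*a1 and k2*a2 trade permits
    under a total emissions cap. In equilibrium the permit price p equalizes
    marginal costs (k1*a1 = k2*a2 = p), with a1 + a2 = required abatement."""
    total_abatement = (e1_0 + e2_0) - cap
    a1 = total_abatement * k2 / (k1 + k2)
    a2 = total_abatement * k1 / (k1 + k2)
    price = k1 * a1
    # Firm 1 buys (positive) or sells (negative) the gap between its
    # post-abatement emissions and its initial permit endowment.
    trade1 = (e1_0 - a1) - permits1
    return a1, a2, price, trade1

# Same cap (120 permits), two opposite initial allocations:
eq_a = market_equilibrium(k1=1.0, k2=3.0, cap=120, e1_0=100, e2_0=80, permits1=0)
eq_b = market_equilibrium(k1=1.0, k2=3.0, cap=120, e1_0=100, e2_0=80, permits1=120)
# Abatement efforts and the permit price are identical; only the trades differ.
print(eq_a[:3] == eq_b[:3])  # True
```

This only shows the mechanism in a frictionless textbook setting; it says nothing about transaction costs, market power, or equity, which is precisely why generalizing such results to claims of social optimality is hazardous.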
An abundant economic literature has sought to compare the respective merits of a carbon tax and an emissions trading scheme in terms of efficiency and/or equity, sometimes using models44, at other times empirical quantitative analyses45. Here it is quite clear that both conceptual and empirical analyses are open to question: we lack the hindsight needed to assess the effects of these instruments, which also depend on numerous institutional, contextual and behavioral factors. This did not stop Jean Tirole, “Nobel Prize winner in economics”, from writing in 2009 in a major report by the Conseil d’analyse économique, Climate policy: a new international architecture: “In collaboration with Jean-Jacques Laffont, I looked into this problem and examined the optimal public policy. We concluded that the social optimum was to issue tradable emission rights […].” The article in question46, published in 1984, is based on a mathematical model that is undoubtedly well constructed, but the claim to deduce from it what is socially optimal leaves us perplexed, and leads us to insist that these models can, at best, help us understand mechanisms, highlight some of their properties and compare them, but in no case do they allow us to conclude that one of them is socially optimal.
Conclusion
The use of mathematics in economics is useful, as we hope to have shown, both pedagogically and theoretically, as well as for a minimal check on theoretical statements. But great caution is called for: this use becomes abusive when its limits and conditions of validity are forgotten.
- The first attempts at a mathematical formalization of economics were made by Léon Walras in his Éléments d’économie politique pure (Elements of Pure Economics, 1874), using very simple linear methods. Since then, numerous mathematical fields have been used in economics: game theory, differential calculus and optimization, integral calculus, matrix calculus, probability, statistics, time series and econometrics, numerical computation and algorithms… ↩︎
- Other social sciences, such as certain branches of sociology, also use mathematics. ↩︎
- See, for example, Alain Supiot, La Gouvernance par les nombres, Fayard, 2015; Olivier Martin, L’empire des chiffres, Dunod, 2020; and Fabrice Boudjaaba et al., De la mesure en toutes choses, CNRS Éditions, 2021. ↩︎
- See, for example, the work of Christian Walter, within the Éthique et finance Chair, and his article Éthique et finance : le tournant performatif published in the journal Transversalités (2012/4 n°124 pp.29 to 42). ↩︎
- A variable is said to be “of interest” when it is the subject of the study (there may be several). Such variables are also called response variables. ↩︎
- A parameter is a numerical value that is not calculated by the model and is not a measured or observed input variable. This parameter can be evaluated on the basis of theoretical studies or on an empirical basis. ↩︎
- See, for example, the “What is optimization modeling?” page on the IBM website: the optimization example given there is that of a classic parcel delivery problem. The aim is to minimize fuel consumption by delivering parcels to various customers in a city. Seeking this minimum is optimization. And to achieve this, it is useful to mathematically model the possible routes and fuel consumption between each delivery point. The site gives other examples of optimization in different sectors. ↩︎
- See Georges Ifrah, Histoire universelle des chiffres : l’intelligence des hommes racontée par les nombres et le calcul, Payot, 1996. ↩︎
- See François Grosse, Is Recycling “Part of the Solution”? The Role of Recycling in an Expanding Society and a World of Finite Resources, Sapiens (2010); Le découplage croissance / matières premières – De l’économie circulaire à l’économie de la fonctionnalité : vertus et limites du recyclage, Futuribles (2010). ↩︎
- Interview with François Grosse, Iron, copper, aluminum… Mineral resources will be exhausted within 50 to 70 years if our consumption continues to grow at the current rate, L’Usine nouvelle (23/05/2023). See also his book Croissance soutenable, la société au défi de l’économie circulaire, PUG, 2023. ↩︎
- Philosopher Karl Popper introduced the criterion of refutability (falsifiability) as a means of demarcating science from other intellectual disciplines. Much has been written about this idea. In Popper’s view, this criterion is indeed very demanding (see the presentation of this criterion and the resulting debate on Wikipedia). We won’t go into that discussion here. The definition of refutability used here is weaker than Popper’s. ↩︎
- Of course, we don’t aim to be exhaustive here. Economics is rich in rules and “laws” that are debatable, yet still taught, even applied, and often thought to be “true” (Taylor’s rule, Ramsey’s rule, Phelps’ rule, Okun’s law, Say’s law, etc.). ↩︎
- In economics, efficiency generally means the “best” allocation of resources (capital and labor), i.e. the one that enables maximum production. ↩︎
- See for example: Steve Keen, L’imposture économique, Éd. de l’Atelier, 2017; Gaël Giraud, Composer un monde en commun, Le Seuil, 2022; Claire Pignol, La théorie de l’équilibre général : un soutien ambigu du libéralisme économique, Alternatives Économiques (01/04/2019); Marc Lavoie, Virginie Monvoisin and Jean-François Ponsot, Post-Keynesian Economics, La Découverte, 2021; Bernard Guerrien, L’illusion économique, Omniscience, 2007. ↩︎
- Vincent Desreumaux, La théorie de l’équilibre général : un colosse aux pieds d’argile, reading note on Claire Pignol’s book: La théorie de l’équilibre général, Revue de la Régulation, 2018. ↩︎
- Economist Pierre-Noël Giraud writes: “Following Ricardo, I consider that the central object of economics is inequality of income and, more generally, of access to the goods of this world, and not growth, the measurement of which is moreover difficult and rightly controversial.” (Pierre-Noël Giraud, L’homme inutile : du bon usage de l’économie, Odile Jacob, 2015). ↩︎
- For more on this see Irène Berthonnet, Thomas Müller, Mathematics as tool or revealer: reconstructing a silent debate between Walras and Pareto, Revue de philosophie économique, 2016. ↩︎
- On this subject, see Gaël Giraud, L’épouvantail du protectionnisme, Revue Projet, 2011. ↩︎
- See his book Irrational Exuberance (Robert J. Shiller, Princeton University Press, 2000, latest edition 2016); French version published by Valor Éditions; see also Marie-Pierre Dargnies et al., Robert J. Shiller – L’exubérance irrationnelle des marchés, in Les Grands Auteurs en Finance, EMS Éditions, 2017. ↩︎
- See for example Michel Albouy, Peut-on encore croire à l’efficience des marchés financiers, Revue française de gestion, 2005; Bernard Guerrien, L’imbroglio de la théorie dite “des marchés efficients” (download in .doc), 2011; Gaël Giraud, Illusion financière, Éditions de l’Atelier, 2014. ↩︎
- Nicolas Bouleau, Critique de l’efficience des marchés financiers, blog Connaissance et pluralisme (23/05/2013) ↩︎
- Personal communication with Alain Grandjean. E-mail exchanges, 2023. ↩︎
- A market is said to be complete if it allows a single price to emerge for each good (defined in detail), present and future. Who knows what the price of a butter croissant will be in 2045? This assumption of market completeness is, of course, never satisfied. See Gaël Giraud, Illusion financière, Éd. de l’Atelier, 2014. ↩︎
- This work had been published in one of the world’s leading journals, the American Economic Review, which therefore overlooked the errors. Is this due to the notoriety of the authors? In any case, it is a good example of the fact that acceptance in a prestigious journal is not enough for an article to be considered accurate. Conversely, Thomas Herndon’s work shows the value of detailed peer review, which is not always carried out. Further reading: Robert Pollin, Public debt, GDP growth, and austerity: why Reinhart and Rogoff are wrong, LSE blog, 2014; Thomas Herndon et al., Does High Public Debt Consistently Stifle Economic Growth? A Critique of Reinhart and Rogoff, Cambridge Journal of Economics, 2014; and the video Les politiques d’austérité : à cause d’une erreur Excel ? on the YouTube channel ScienceEtonnante. ↩︎
- Government Debt and Economic Growth – Overreaching Claims of Debt “Threshold” Suffer from Theoretical and Empirical Flaws, EPI Briefing Paper, 2010 ↩︎
- These methods, popularized by “Nobel Prize” winner Esther Duflo, are the subject of academic evaluation and debate. See, for example, Sacha Bourgeois-Gironde and Éric Monnet’s article, Expériences naturelles et causalité en histoire économique – Quels rapports à la théorie et à la temporalité ?, Annales. Histoire, Sciences Sociales, 2017. ↩︎
- See, for example, Pauline Givord, Méthodes économétriques pour l’évaluation de politiques publiques, Économie & prévision, 2014, as well as introductory books on econometrics. ↩︎
- In France, the debate reached a climax with the publication of the book by economists Pierre Cahuc and André Zylberberg, Le négationnisme économique, et comment s’en débarrasser, Flammarion, 2016, and the response from Les Économistes atterrés, Misère du scientisme en économie, Éditions du Croquant, 2017. It is discussed in the book by the Scientific Council of the Fondation pour la Nature et l’Homme, What kind of science for the world to come? Faced with climate change and the destruction of biodiversity, ed. Alain Grandjean, Odile Jacob, 2020. ↩︎
- Indeed, even in empirical studies, journals never check the data or the code that produces the analyses (which partly explains why the error made by Reinhart and Rogoff, mentioned in paragraph 2.1, was not detected by the journal). ↩︎
- Steve Keen, The appallingly bad neoclassical economics of climate change, Globalizations, 2020 ↩︎
- Andrew J. Oswald, Nicolas Stern, Why does the economics of climate change matter so much, and why has the engagement of economists been so weak?, Royal Economic Society, 2019. See also their article in VoxEU, Why are economists letting down the world on climate change? in which the authors summarize their remarks and challenge their colleagues. ↩︎
- Quote from Pierre-Noël Giraud (Mines Paris Tech, Dauphine-PSL). ↩︎
- Faced with the climate crisis, economics needs to reinvent itself, Le Monde (19/05/2023) ↩︎
- Ivar Ekeland, À quand la fin du pétrole, Alain Grandjean’s blog, 14/04/2023; popularization of the working paper he co-authored with Wolfram Schlenker, Peter Tankov and Brian Wright, Optimal Exploration and Price Paths of a Non-renewable Commodity with Stochastic Discoveries, 2023. ↩︎
- Alain Grandjean, Gaël Giraud, Comparaison des modèles météorologiques, climatiques et économiques : quelles capacités, quelles limites, quels usages ? Working paper, Energy and Prosperity Chair, 2017. ↩︎
- Insee will not include drug trafficking and prostitution in the calculation of French GDP, Le Monde, 18/06/2014. ↩︎
- The scenario database used for the IPCC’s sixth synthesis report (2022) includes 188 models. See the AR6 Scenario Explorer and Database page on the International Institute for Applied Systems Analysis (IIASA) website. ↩︎
- See the Shift Project note, Understanding the challenges of energy-climate-economy modeling, 2019. ↩︎
- In particular, general equilibrium models are conceptually incapable of accounting for an endogenous economic crisis, since their core is the automatic return to equilibrium. What’s more, the majority of them do not represent the monetary sphere or credit. ↩︎
- See Ronald Coase, The problem of social cost, Journal of Law and Economics, 1960. ↩︎
- In Pollution, Property and Prices, University of Toronto Press, 1968. ↩︎
- This “theorem” has never been formally demonstrated, and has been the subject of a large body of critical literature, sometimes using mathematical apparatus. ↩︎
- See Olivier Godard, L’expérience américaine des permis négociables, Economie Internationale, CEPII, 2000. ↩︎
- See for example Julius Andersson and Giles Atkinson, The distributional effects of a carbon tax: The role of income inequality, Centre for Climate Change Economics and Policy and Grantham Research Institute on Climate Change and the Environment, 2020. ↩︎
- See for example Jeremy Carl, David Fedor, Tracking global carbon revenues: A survey of carbon taxes versus cap-and-trade in the real world, Energy Policy, 2016. ↩︎
- Jean-Jacques Laffont, Jean Tirole, Pollution permits and compliance strategies, Working paper, MIT department of Economics, 1984. ↩︎