TL;DR
A genetic algorithm works like a charm when implemented in an agent-based model, allowing agents to learn within the simulation.
The study of economics is mostly about optimizing one's utility, whether the agent is assumed to have perfect foresight or to be myopic. A classical and typical approach is to define a utility function, or sometimes a lifetime utility function, and mathematically derive the corresponding decision rule an agent should follow under given circumstances.
However, while mathematics has the advantage of making the mechanisms analytically transparent, it inevitably limits how agents perceive information in a model. In macroeconomics, assuming that an agent knows how the model works and takes all obtainable information into consideration, also known as rational expectations, greatly simplifies the math, and economists accept this paradigm for good reason. It works pretty well most of the time, qualitatively and quantitatively, on topics such as growth and labor economics. But in cases where irrationality and disequilibrium are the critical interests, financial instability and bubbles for instance, the rational expectations design turns out to be a restriction.
Example of a Limitation under Rational Expectations
In Gertler and Kiyotaki (2015), a dynamic bank run is considered. During no-run periods, banks “obey” the incentive constraint (IC) and do not deviate too much. But in a bank-run case, the IC is no longer binding, causing a much lower liquidation price of the asset, and the economy “turns into” a bank-run economy. What strikes me as unintuitive is how one economy turns into the other, and indeed, the authors tackle this “disequilibrium” by
Starting from the end of the simulation and working backwards; compute the path of the economy after a run happens back to steady state
, as quoted from a comment in their MATLAB code for plotting the impulse response functions.
It is unsatisfying that we have to work around this ex post, instead of naturally observing the chronological behavior leading toward instability.
Agent-Based Model
One solution is called “agent-based modeling”, or ABM.
What this basically does is define only the behavior of an agent: decisions on money holding, interactions with neighbors, actions upon seeing the price fall, and so on. We then place these “automated units of agents” into a simulation and observe the aggregate behavior of the economy.
Think of these agents as NPCs in a video game. They interact with each other and make decisions; as a modeler you just assign the parameters, for example locations, initial wealth, perhaps stamina (which determines the number of interactions per unit of time), just like in Civilization.
Back to the bank-run case. One simple arrangement is
If 40% of my neighbors want to withdraw this period, then no matter whether I am patient enough to save for tomorrow, I will withdraw as well.
After implementing this simple zero-intelligence agent into the model, and perhaps adding some shocks to part of the agents, a modeler can then observe how a stable economy collapses into a financial crisis. A recent implementation of bank runs using ABM is Santos and Nakane (2021).
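The withdrawal rule above can be written down directly. Here is a minimal sketch; the 40% threshold follows the rule quoted above, while the function name and interface are my own illustrative choices:

```python
import numpy as np

def withdraw(is_patient, neighbor_withdrawals, threshold=0.4):
    """Zero-intelligence bank-run rule: withdraw when at least 40% of
    neighbors withdraw this period, regardless of the agent's own patience."""
    if np.mean(neighbor_withdrawals) >= threshold:
        return True                 # panic along with the neighbors
    return not is_patient           # otherwise act on the agent's own type

# A patient saver surrounded by withdrawing neighbors panics too:
print(withdraw(is_patient=True, neighbor_withdrawals=[1, 1, 0, 1, 0]))  # True
```

Running such rules in parallel over many agents is what lets a local panic propagate into an economy-wide run.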
An interesting article using ABM won the 2022 Ig Nobel Prize in Economics by tackling the question “Which Is More Important: Talent or Luck?”
Make the Agents Smarter
One problem with traditional ABM is that the agents are “too stupid”, hence the terminology “zero-intelligence”. Agents follow a simple rule and never learn to improve their strategy in order to make the most of the current opportunity or predicament.
Of course, one solution is to “improve” their decision rule by the modeler’s own hand. This is in fact what some institutes such as the Santa Fe Institute did back in the 90s (I might be wrong). They derived the Euler equation according to economic theories and set it as the decision rule for agents. In some research, the researchers even implemented the Black-Scholes model inside the agents.
Another path is to let the agents “learn”. Among the many implementations, such as reinforcement learning (using gradients), here I will introduce the genetic algorithm.
Genetic Algorithm
Each agent is treated as a living thing, with a “gene” inside, usually represented as an array of 0s and 1s, and a “fitness” value. The idea is that an agent with a better fitness value in this environment has a better gene, and, cruelly, the better-performing agents have a higher chance of mating and having offspring. This process continues, and through evolution we expect to end up with a “gene pool” containing only the toughest genes.
A genetic algorithm usually contains the following three (+1) steps:
- Selection — The higher the fitness, the higher the opportunity to get a mate
- Crossover — The parent agents give birth to two children, with a segment of gene swapped
- Mutation — With some chance the offspring’s gene is mutated.
- (Election) — The top two of parent and children are preserved.
The last step is rather specific, as it is only implemented in the paper I am about to introduce.
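The four steps above can be sketched in a few lines of Python. This is a minimal illustration; the function names, fitness-proportional sampling, and single-cut crossover are common textbook choices, not the exact scheme of any particular paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def select(pop, fitness):
    """Selection: draw two parents with probability proportional to fitness."""
    i, j = rng.choice(len(pop), size=2, p=fitness / fitness.sum())
    return pop[i], pop[j]

def crossover(a, b):
    """Crossover: the two children swap gene segments after a random cut."""
    cut = rng.integers(1, len(a))
    return (np.concatenate([a[:cut], b[cut:]]),
            np.concatenate([b[:cut], a[cut:]]))

def mutate(gene, rate=0.01):
    """Mutation: flip each bit independently with a small probability."""
    flips = rng.random(len(gene)) < rate
    return np.where(flips, 1 - gene, gene)

def elect(parents, children, fitness_fn):
    """Election: keep only the top two among the parents and children."""
    ranked = sorted(list(parents) + list(children), key=fitness_fn, reverse=True)
    return ranked[0], ranked[1]
```

With a toy fitness such as `lambda g: g.sum()`, one generation is simply `elect(parents, [mutate(c) for c in crossover(*parents)], fitness_fn)`.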
Currency Substitution
In an article by Jasmina Arifovic, Evolutionary Dynamics of Currency Substitution, a genetic algorithm is implemented to represent the decision-making process of a young agent.
The agent is attached with a 30-bit “string”, with the first 20 bits decoded as its consumption, and the last 10 bits decoded as its proportion of holding one currency over the other.
A quick demonstration with Python code is as follows:
import numpy as np

# string: the agent's 30-bit gene (array of 0s and 1s); endowment: income when young
consumption = string[:20].dot(1 << np.arange(20))  # first 20 bits as a binary integer
# (the paper further scales this value so consumption cannot exceed the endowment)
portfolio_1 = string[20:].dot(1 << np.arange(10)) / (2**10 - 1)  # last 10 bits as a proportion in [0, 1]
saving = endowment - consumption
hold_1 = saving * portfolio_1        # holdings of currency 1
hold_2 = saving * (1 - portfolio_1)  # holdings of currency 2
This decision process for a young agent is repeated at each step of the model. After each agent gets old, it evaluates how well it has lived in this economy and passes its genes down (if capable) to a new generation, which is heuristically better than the last.
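Schematically, that generational loop looks like the toy below. The bit lengths, fitness function, and all parameter values here are my own illustrative choices, far simpler than the actual model:

```python
import numpy as np

rng = np.random.default_rng(1)
ENDOWMENT = 10.0

def decode(gene):
    """Toy decoding: first 4 bits -> consumption, last 4 -> currency-1 share."""
    consumption = gene[:4].dot(1 << np.arange(4)) / 15 * ENDOWMENT
    share_1 = gene[4:].dot(1 << np.arange(4)) / 15
    return consumption, share_1

def fitness(gene):
    """Toy fitness: log utility of the decoded consumption."""
    consumption, _ = decode(gene)
    return np.log(1 + consumption)

def next_generation(pop, mutation_rate=0.05):
    """One generational turnover: fitness-proportional selection plus mutation."""
    f = np.array([fitness(g) for g in pop]) + 1e-9  # avoid division by zero
    children = pop[rng.choice(len(pop), size=len(pop), p=f / f.sum())].copy()
    flips = rng.random(children.shape) < mutation_rate
    return np.where(flips, 1 - children, children)

pop = rng.integers(0, 2, size=(20, 8))   # 20 young agents, 8-bit genes
for t in range(50):                      # 50 generational turnovers
    pop = next_generation(pop)
```

Each pass of the loop retires the old generation and breeds the young one from the fitter genes, which is where the learning comes from.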
Insights
When using the rational expectations method, a country printing more money to accommodate its deficit actually drives out the currency that corresponds to the more restrictive monetary policy, according to the paper. Arifovic also ran experiments in a lab (with humans) and got a different result: after several rounds of learning, the more restricted currency turns out to be the one that survives.
This simple agent-based model gives a result closer to the human experiments, as one currency degenerates (hyperinflation) and agents eventually decide not to hold any of it.
The key here lies in the overlapping-generations design and the ability of an agent to learn to perform better over time, using the concept of evolution.
More detail can be found in her original paper.
Will Genetic Algorithms Be a Trend?
Genetic algorithms come in handy in agent-based models, in which the modelers don’t want to explicitly make decisions for the agents and instead greedily let them search for the optimal behavior. Since the concept of evolution (selection, crossover, and mutation) originates from living things, it is somewhat intuitive to link it to the design of agent-based modeling.
This enlarges the toolbox for economics research in game theory, asset pricing, etc., where we might want the agents to learn in some natural manner.
However, outside of this agent-based setting, there are genuinely better tools to “train” a model to find an optimum from data; in the context of econometrics, gradient-based learning (e.g., BHHH).
Moreover, as long as economists seek a concrete “object”, usually a mathematical equation, to explain a phenomenon and elaborate it to whom it may concern, agent-based modeling might not meet their needs philosophically, let alone this mysterious and mercurial process of synthetic evolution.
Appendix
I’m sharing an implementation of the genetic algorithm of Arifovic (2001) on GitHub:
The genetic algorithm section is isolated from the main model, unlike the example code I showed in this article. This allows the algorithm to be reused, and at the same time lets my model plug in a different genetic algorithm design (perhaps a more intense mutation process) if I wanted. This is called dependency injection, or the “strategy pattern”, in software engineering.
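A minimal sketch of what that pattern looks like; all names here are illustrative and not taken from the actual repository:

```python
import numpy as np

def standard_breed(genes, rate=0.01):
    """Illustrative GA strategy: flip each bit with a small probability."""
    flips = np.random.default_rng(0).random(genes.shape) < rate
    return np.where(flips, 1 - genes, genes)

def intense_breed(genes, rate=0.2):
    """Alternative strategy with a much more intense mutation process."""
    return standard_breed(genes, rate=rate)

class Model:
    """The model receives the GA as a plug-in instead of hard-coding it."""
    def __init__(self, breed):
        self.breed = breed              # injected dependency

    def step(self, genes):
        return self.breed(genes)

population = np.zeros((4, 30), dtype=int)
gentle = Model(standard_breed).step(population)   # few bits flipped
harsh = Model(intense_breed).step(population)     # many bits flipped
```

Because the model only depends on the `breed` callable’s interface, swapping in a new GA design requires no change to the model itself.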