
  development. It occurred to me, gradually at first, that John and I could

  design a primitive artificial economy that would execute on my computer, and

  use his learning system to generate increasingly sophisticated action rules that would build on each other and thus emulate how an economy bootstraps its

  way up from raw simplicity to modern complication. In my mind I pictured

  this miniature economy with its little agents as sitting in a computer in the

  corner of my office. I would hit the return button to start and come back a

  few hours later to peer in and say, oh look, they are trading sheep fleeces for obsidian. A day later as the computation ran, I would look again and see that

  a currency had evolved for trading, and with it some primitive banking. Still

  later, joint stock companies would emerge. Later still, we would see central

  banking, and labor unions with workers occasionally striking, and insurance

  companies, and a few days later, options trading. The idea was ambitious and

  I told Holland about it over the phone. He was interested, but neither he nor

  I could see how to get it to work.

  That was still the status the following summer in June 1988 when Holland

  and I met again in Santa Fe shortly before the program was to start. I was keen to have some form of this self-evolving economy to work with. Over lunch

  at a restaurant called Babe’s on Canyon Road, John asked how the idea was

  coming. I told him I found it difficult, but I had a simpler idea that might be feasible. Instead of simulating the full development of an economy, we could

  simulate a stock market. The market would be completely stand-alone. It

  would exist on a computer and would have little agents—computerized investors

  that would each be individual computer programs—who would buy and

  sell stock, try to spot trends, and even speculate. We could start with simple agents and allow them to get smart by using John’s evolving condition-action


  rules, and we could study the results and compare these with real markets.

  John liked the idea.

  We began in the fall, with the program now started, to build a computer-based

  model of the stock market. Our “investors,” we had decided, would be individual

  computer programs that could react and evolve within a computer that

  sat on my desk. That much was clear, but we had little success in reducing the market to a set of condition-action rules, despite a number of attempts. The

  model was too ad-hoc, I thought—it wasn’t clean. Tom Sargent happened to

  be visiting from Stanford and he suggested that we simply use Robert Lucas’s

  classic 1978 model of the stock market as a basis for what we were doing. This worked. It was both clean and doable. Lucas’s model of course was mathematical; it was expressed in equations. For ease of analysis, his investors had been identical; they responded to market signals all in the same way and on average correctly, and Lucas had managed to show mathematically how a stock’s price

  over time would vary with its recent sequence of earnings.
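
  For reference, the pricing relation at the heart of Lucas’s model (the standard statement of the 1978 paper, added here as background rather than quoted from this preface) is the Euler condition

  \[
  p_t\, u'(c_t) \;=\; \beta\, E_t\!\left[(p_{t+1} + d_{t+1})\, u'(c_{t+1})\right],
  \]

  where d_t is the stock’s dividend (earnings) stream, c_t = d_t in equilibrium, u is the identical investors’ common utility function, and \beta their discount factor. Iterating this forward expresses today’s price as a discounted expectation of future earnings, which is the sense in which the price tracks the recent earnings sequence.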

  Our investors, by contrast, would potentially differ in their ideas of the

  market and they would have to learn what worked in the market and what

  didn’t. We could use John’s methods to do this. The artificial investors would develop their own condition/forecast rules (e.g., if prices have risen in the

  last 3 periods and volume is down more than 10%, then forecast tomorrow’s price will be 1.35% higher). We would also allow our investors to have several such rules that might apply—multiple hypotheses—and at any time they

  would act on whichever of these had recently proved most accurate. Rules

  or hypotheses would of course differ from investor to investor; they would

  start off chosen randomly and would be jettisoned if useless or recombined to

  generate potential new rules if successful. Our investors might start off not

  very intelligently, but over time they would discover what worked and would

  get smarter. And of course this would change the market; they might have to

  keep adjusting and discovering indefinitely.
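
  To make the mechanism concrete, here is a minimal sketch of such a condition/forecast investor, written in Python for illustration only (the model itself, described next, was coded in Basic; the rule representation, accuracy updating, and parameters below are illustrative assumptions, not the original design):

      class Rule:
          """One condition/forecast hypothesis: if the condition holds, predict a price change."""
          def __init__(self, condition, forecast_change):
              self.condition = condition              # function: price history -> True/False
              self.forecast_change = forecast_change  # e.g. 0.0135 means "forecast 1.35% higher"
              self.accuracy = 0.0                     # running measure of recent forecast accuracy

      class Investor:
          """Holds several competing hypotheses and acts on whichever was recently most accurate."""
          def __init__(self, rules):
              self.rules = rules

          def forecast(self, history):
              active = [r for r in self.rules if r.condition(history)]
              if not active:
                  return 0.0                          # no hypothesis applies: expect no change
              return max(active, key=lambda r: r.accuracy).forecast_change

          def update(self, history, realized_change, decay=0.95):
              # Strengthen rules whose forecasts came close to what the market actually did.
              for r in self.rules:
                  if r.condition(history):
                      error = abs(r.forecast_change - realized_change)
                      r.accuracy = decay * r.accuracy + (1 - decay) * (1.0 - error)

      # A rule in the spirit of the example in the text (price condition only, simplified):
      rose_three_periods = lambda h: len(h) >= 4 and h[-1] > h[-2] > h[-3] > h[-4]
      investor = Investor([Rule(rose_three_periods, 0.0135),   # "forecast 1.35% higher"
                           Rule(lambda h: True, 0.0)])         # default: expect no change
      print(investor.forecast([100, 101, 102, 103]))           # 0.0135: the trend rule fires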

  We programmed the initial version in Basic on a Macintosh with physicist

  Richard Palmer doing the coding. Initially our effort was to get the system

  to work, to get our artificial investors to bid and offer on the basis of

  their current understandings of the market and to get the market to clear

  properly, but when all this worked we saw little at first sight that was different from the standard economic outcome. But then looking more closely, we

  noticed the emergence of real market phenomena: small bubbles and crashes

  were present, as were correlations in prices and volume, and periods of high

  volatility followed by periods of quiescence. Our artificial market was showing real-world phenomena that standard economics with its insistence on identical agents using rational expectations could not show.

  I found it exciting that we could reproduce real phenomena that the standard

  theory could not. We were aware at the time that we were doing something

  different. We were simulating a market in which individual behavior


  competed and evolved in an “ecology” that these behaviors mutually created.

  This was something that couldn’t easily be done by standard equation-based

  methods—if forecasting rules were triggered by specific conditions and if they differed from investor to investor, their implications would be too complicated to study. And it differed from other computerized rule-based models that

  had begun to appear from about 1986 onward. Their rules were few and were

  fixed—laid down in advance—and tested in competition with each other.

  Our rules could change, mutate, and indeed “get smart.” We had a definite

  feeling that the computer would free us from the simplifications of standard

  models or standard rule-based systems. Yet we did not think of our model as

  a computer simulation of the market. We saw it as a lab experiment where we

  could set up a base case and systematically make small changes to explore

  their consequences.

  We didn’t quite have a name for this sort of work—at one stage we called

  it element-based modeling, as opposed to equation-based modeling. About

  three years later, in 1991, John Holland and John Miller wrote a paper about

  modeling with “artificial adaptive agents.”2 Within the economics community

  this label morphed into “agent-based modeling” and that name stuck. We took

  up other problems that first year of the Economics Program. Our idea was

  not to try to lay out a new general method for economics, as Samuelson and

  others had tried to do several decades before. Rather we would take known

  problems, the old chestnuts of economics, and redo them from our different

  perspective. John Rust and Richard Palmer were looking at the double auction

  market this way. David Lane and I were working on information contagion, an

  early version of social learning, using stochastic models. I had thought that

  ideas of increasing returns and positive feedbacks would define the first years of the program. But they didn’t. What really defined it, at least intrinsically, was John Holland’s ideas of adaptation and learning. I had also thought we

  were going slowly and not getting much done, but at the end of our first year, in August 1989, Kenneth Arrow told us that compared with the initial years

  of the Cowles Foundation effort in the 1950s, our project had made faster

  progress and was better accepted.

  I left Santa Fe and returned to Stanford in 1990 and the program passed

  into other hands. It continued with various directors throughout the 1990s

  and the early 2000s with considerable success, delving into different themes

  depending on the directors’ interests and passing through periods of relative

  daring and relative orthodoxy. I returned to the Institute in 1995 and stayed

  with the Program for a further five years.

  Most of the economic papers in this volume come out of this first decade

  or so of SFI’s economics program. We published an early version of the stock

  2. J. H. Holland and J. H. Miller, “Artificial Adaptive Agents in Economic Theory,”

  Amer. Econ. Assoc. Papers and Proceedings, 81, 2, 365–370, 1991.


  market paper in Physica A in 1992, and followed that with the version included here in 1997. The paper got considerable notice and went on to influence much

  further work on agent-based economics.

  One other paper that attracted wide notice came out in 1994, and this was my

  El Farol paper (included in this volume as Chapter 2). The idea had occurred to me at a bar in Santa Fe, El Farol. There was Irish music on Thursday nights, and if the bar was not too full it was enjoyable; if the bar was crowded it was much less so. It occurred to me that if everyone predicted that many would come

  on a given night, they would not come, negating that forecast; and if everyone

  predicted few would come, they would come, negating that forecast too.

  Rational forecasts—rational expectations—would be self-negating. There was

  no way to form properly functioning rational expectations. I was curious about what artificial agents might make of this situation and in 1993 I programmed

  it up and wrote a paper on it. The paper appeared in the American Economic Review’s Papers and Proceedings, and economists didn’t know at first what to make of it. But it caught the eye of Per Bak, the physicist who had originated the idea of self-organized criticality. He started to fax it to colleagues, and suddenly El Farol was well known in physics. Three years later, a game-theoretic

  version of the problem was introduced by the physicists Damien Challet and

  Yi-Cheng Zhang of the University of Fribourg as the Minority Game.3 Now,

  several hundred papers later, both the Minority Game and El Farol have been

  heavily studied.

  3. D. Challet and Y-C. Zhang, “Emergence of Cooperation and Organization in an Evolutionary Game,” Physica A 246: 407–418, 1997.
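
  A toy version of the situation is easy to write down. The sketch below is in Python, for illustration only; the predictors, the 60-person comfort threshold, and the scoring are illustrative choices in the spirit of the published model, not the 1993 program itself. It shows the self-negating character of forecasts: each agent goes only if its currently best predictor expects the bar to be uncrowded, so widely shared forecasts tend to negate themselves.

      import random

      N_AGENTS, CAPACITY, WEEKS = 100, 60, 52   # 100 bar-goers, comfortable up to 60

      def make_predictor():
          """A simple hypothesis: predict attendance as the average of the last k weeks."""
          k = random.randint(1, 5)
          return lambda hist, k=k: (sum(hist[-k:]) / len(hist[-k:])) if hist else random.uniform(0, N_AGENTS)

      # Each agent carries several predictors and acts on the recently most accurate one.
      predictors = [[make_predictor() for _ in range(3)] for _ in range(N_AGENTS)]
      scores = [[0.0] * 3 for _ in range(N_AGENTS)]
      history = []

      for week in range(WEEKS):
          attendance = 0
          for a in range(N_AGENTS):
              best = max(range(3), key=lambda i: scores[a][i])
              forecast = predictors[a][best](history)
              if forecast < CAPACITY:          # go only if the bar is expected to be uncrowded
                  attendance += 1
          for a in range(N_AGENTS):            # score each predictor against what actually happened
              for i in range(3):
                  scores[a][i] = 0.9 * scores[a][i] - 0.1 * abs(predictors[a][i](history) - attendance)
          history.append(attendance)

      print(history)   # attendance keeps fluctuating as forecasts negate themselves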

  In 1997 my ideas took off in a different direction, one that wasn’t directly

  related to Santa Fe’s economics program. I became deeply interested in technology.

  The interest at first puzzled me. My early background was engineering,

  but still, this fascination with technology seemed to have nothing to do

  with my main interests in either economics or complexity. The interest had in

  fact been kindled years before, when I was exploring the idea of technologies

  competing for adoption. I had noticed that technologies—all the technologies

  I was looking at—had not come into being out of inspiration alone. They

  were all combinations of technologies that already existed. The laser printer

  had been put together from—was a combination of—a computer processor, a

  laser, and xerography: the processor would direct the laser to “paint” letters or images on a copier drum, and the rest was copying.

  I had realized something else as well. In 1992 I had been exploring jet

  engines out of curiosity and I wondered why they had started off so simple

  yet within two or three decades had become so complicated. I had been learning

  C programming at the time, and it occurred to me that C programs were

  structured in basically the same way as jet engines, and as all technologies for

  that matter. They had a central functioning module, and other sub-modules hung off this to set it up properly and to manage it properly. Over time with a given technology, the central module could be squeezed to deliver more performance if sub-technologies were added to get past physical limits or to work around problems, and so a technology would start off simple, but would add

  pieces and parts as it evolved. I wrote an essay in Scientific American in 1993

  about why systems tended to elaborate.4

  Somehow in all this I felt there was something general to say about technol-

  ogy—a general theory of technology was possible. I had started to read widely

  on technology, and decided I would study and know very well several particular

  technologies, somewhere between a dozen and twenty. In the end these

  included not just jet engines, but early radio, radar, steam engines, packet

  switching, the transistor, masers, computation, and even oddball “technologies”

  such as penicillin. Much of this study I did in St. John’s College library in Santa Fe, some also in Xerox Parc where I was now working. I began to see

  common patterns emerging in how technologies had formed and come into

  being. They all captured and used phenomena: ultimately technologies are

  phenomena used for human purposes. And phenomena came along in families—

  the chemical ones, the electronic ones, the genomic ones—so that technologies

  formed into groups: industrial chemistry, electronics, biotechnology.

  What became clear overall was that it wasn’t just that individual technologies

  such as the jet engine evolved over their lifetimes. Technology—the whole collection of individual technologies—evolved in the sense that all technologies at any time, like all species, could trace a line of ancestry back to earlier technologies. But the base mechanism was not Darwinian. Novel technologies did not come into existence by the cumulation of small changes in earlier technologies: the jet engine certainly did not emerge from small changes in

  air piston engines. Novel technologies sprang from combining or integrating

  earlier technologies, albeit with human imagination and ingenuity. The result

  was a mechanism for evolution different from Darwin’s. I called it Evolution

  by Combination, or Combinatorial Evolution.

  This mechanism exists of course also in biological evolution. The major tran-

  sitions in evolution are mostly combinations. Unicellular organisms became

  multicellular organisms by combination, and prokaryotes became eukaryotes

  by combination. But the occurrence of such events is rare, every few hundred

  million years at best. The day-to-day evolutionary mechanism in biology is

  Darwinian accumulation of small changes and differential selection of these.

  By contrast, in technology the standard evolutionary mechanism is combina-

  tion, with Darwinian small changes following once a new technology exists.

  4. W. B. Arthur, “Why Do Things Become More Complex?” Scientific American, May 1993.


  I felt I now understood how technologies came into existence, and how the collection of technology evolved. I wanted to see if I could make such evolution work in the lab or on a computer. Around 2005 I was working at FXPAL,

  Fuji Xerox’s think tank in Palo Alto, and I had met the computer scientist

  Wolfgang Polak. Could we create a computer experiment in which a soup of

  primitive technologies could be combined at random and the resulting combination—

  a potential new technology—tossed out if not useful but retained if

  useful and added to the soup for further combination? Would such a system,

  creating successive integrations in this way, bootstrap its way from simplicity to sophistication? We experimented with several systems, to no avail. Then

  we came across a beautiful paper by Richard Lenski in Nature,5 where he and his colleagues had used the genetic algorithm to evolve digital circuits. Digital technologies seemed a natural medium to work in: if you combined two digital

  circuits you got another digital circuit; and the new circuit might do something

  useful or it might not.
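
  To show the flavour of such an experiment, here is a radically simplified sketch in Python (illustration only: it works with 2-input, 1-output circuits, so the best it can discover are gates such as XOR rather than 8-bit adders, and the combination rule, step count, and bookkeeping are assumptions, not the actual system described below). Circuits are built only by wiring existing circuits and raw input wires into one another, and a combination is kept only if it computes something not seen before:

      import random
      from itertools import product

      INPUTS = list(product([0, 1], repeat=2))            # the four 2-bit input patterns

      def table(circuit):
          """Truth table of a 2-input, 1-output circuit; used as its identity."""
          return tuple(circuit(a, b) for a, b in INPUTS)

      nand = lambda a, b: 1 - (a & b)
      wire_a = lambda a, b: a                              # raw input wires, available for wiring
      wire_b = lambda a, b: b

      soup = {table(nand): nand}                           # the soup starts with NAND alone

      for step in range(20000):
          f = random.choice(list(soup.values()))           # an existing circuit as the outer gate
          u, v = (random.choice(list(soup.values()) + [wire_a, wire_b]) for _ in range(2))
          combo = lambda a, b, f=f, u=u, v=v: f(u(a, b), v(a, b))   # wire u and v into f
          if table(combo) not in soup:                     # keep the combination only if it is new
              soup[table(combo)] = combo

      print(len(soup), "distinct 2-input circuits (out of 16 possible)")
      print("AND found:", (0, 0, 0, 1) in soup)
      print("OR  found:", (0, 1, 1, 1) in soup)
      print("XOR found:", (0, 1, 1, 0) in soup)

  Starting from NAND alone, the familiar building blocks (NOT, AND, OR, XOR, and the rest) typically all appear within a few thousand random combinations, each built from circuits kept at earlier steps.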

  Getting our experiment to work wasn’t easy, but after a couple of months

  Polak got the system running and it began to “create” novel circuits from

  simple ones. Beginning with a soup of simple 2-bit nand circuits, the basic building block in digital circuits, we could press the return button to start

  the experiment and examine what had been created 20 hours later. We found

  circuits of all kinds. Elementary ones had formed first, then ones of intermediate

  complication such as a 4-bit equals, or 3-bit less than. By the end an 8-bit exclusive-or, 8-bit and, and an 8-bit adder had formed. Casually this may not seem that significant. But an 8-bit adder that works correctly (adding 8 bits of x to 8 bits of y to yield 9 bits for the result, z) is one of over 10^177,554 circuits with 16 inputs and 9 outputs, and the chance of finding that randomly in

  250,000 steps is negligible. Our successive integration process, of combining

  primitive building blocks to yield useful simple building blocks, and combining

  these again to create further building blocks, we realized was powerful.

  And actual technology had evolved in this way. It had bootstrapped its way

  from few technologies to many, and from primitive ones to highly complicated

  ones.
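
  As an aside (this check is not in the original text), the figure of 10^177,554 is simply the number of distinct Boolean functions that a circuit with 16 input bits and 9 output bits can compute:

  \[
  \left(2^{2^{16}}\right)^{9} \;=\; 2^{\,9 \times 65{,}536} \;=\; 2^{\,589{,}824} \;\approx\; 10^{\,589{,}824\,\log_{10} 2} \;\approx\; 10^{\,177{,}554},
  \]

  of which exactly one behaves as the correct 8-bit adder, so an undirected random search over 250,000 steps has essentially no chance of hitting it.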

  We published our experiment in Complexity but, strange to say, it was little noticed or commented on. My guess is that it fell between the cracks. It wasn’t

  biological evolution, it wasn’t the genetic algorithm, it wasn’t pure technology,

  and it wasn’t economics. And the experiment didn’t solve a particular

  problem. It yielded a toolbox or library of useful circuits, much like the library of useful functions that programming language designers provide. But it

  yielded this purely by evolution, and I found this a wonder. I have a degree in electrical engineering and Polak has one in computer science, but if you asked

  5. R. Lenski, C. Ofria, R. Pennock, and C. Adami, “The Evolutionary Origin of Complex Features,” Nature, 423, 139–144, 2003.