The construction of any highly complex object or individual, whether an oak tree or an airliner, typically uses hierarchical, modular structures to manage and organise that complexity. Animals develop in a highly regular way that yields a hierarchical structure of components ranging from systems and organs down to cells and organelles. GP, as described so far, is typically used to evolve expressions that, while being suitable solutions to many problems, rarely exhibit any large-scale modular structure.
Given the pervasiveness of hierarchical structure as an organisational tool in both biology and engineering, it seems likely that such modular structure could be valuable in genetic programming as well. Consequently, this has been a subject of study from the early days of genetic programming. For example, Angeline and Pollack (1992) created dynamic libraries of subtrees taken from parts of fit GP trees. Special mutation operations allowed the GP population to share code by referring to the same code within the library. Subsequently, Angeline suggested that the scheme's advantages lay in allowing GP individuals to access far more code than they actually "held" within themselves, rather than principally in developing more modular code. Rosca and Ballard (1996a) used a similar scheme, but were able to use much more information from the fitness function to guide the selection of the code to be inserted into the library and its subsequent use by members of the GP population. Olsson (1995, 1999) later developed an abstraction operator for use in his ADATE system, in which sub-functions (anonymous lambda expressions) were automatically extracted. Unlike Angeline's library approach, Olsson's modules remained attached to the individual they were extracted from.
Koza's automatically defined functions (ADFs) (Koza, 1994) remain the most widely used method of evolving reusable components and have been used successfully in a variety of settings. Basic ADFs (covered in Section 6.1.1) use a fixed architecture specified in advance by the user. Koza later extended this using architecture-altering operations (Section 6.1.2), which allow the architecture to evolve along with the programs.
6.1.1 Automatically Defined Functions
Human programmers organise sequences of repeated steps into reusable components such as subroutines, functions and classes. They then repeatedly invoke these components, typically with different inputs. Reuse eliminates the need to "reinvent the wheel" every time a particular sequence of steps is needed. Reuse also makes it possible to exploit a problem's modularities, symmetries and regularities (thereby potentially accelerating the problem-solving process). This can be taken further, as programmers typically organise these components into hierarchies in which top-level components call lower-level ones, which call still lower levels, etc. Koza's ADFs provide a mechanism by which the evolutionary process can evolve these kinds of potentially reusable components. We will review the basic concepts here, but ADFs are discussed in great detail in (Koza, 1994).
When ADFs are used, a program consists of multiple components. These typically consist of one or more function-defining branches (i.e., ADFs), as well as a main result-producing branch (the RPB), as illustrated in the example in Figure 6.1. The RPB is the "main" program that is executed when the individual is evaluated. It can, however, call the ADFs, which can in turn potentially call each other. A single ADF may be called multiple times by the same RPB, or by a combination of the RPB and other ADFs, allowing the logic that evolution has assembled in that ADF to be re-used in different contexts.
Consider, for example, the following individual consisting of a result-producing branch and a single ADF:
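A minimal sketch of such an individual, assuming a Lisp-style tuple encoding of the trees (the interpreter, and the names eval_tree, ADF0, arg0 and arg1, are illustrative assumptions, not Koza's implementation):

```python
import operator

FUNCS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def eval_tree(tree, env, adfs):
    """Recursively evaluate a program tree against a variable environment."""
    if isinstance(tree, (int, float)):
        return tree                         # constant terminal
    if isinstance(tree, str):
        return env[tree]                    # variable or argument terminal
    op, *args = tree
    vals = [eval_tree(a, env, adfs) for a in args]
    if op in adfs:                          # call an ADF with fresh argument bindings
        params, body = adfs[op]
        return eval_tree(body, dict(zip(params, vals)), adfs)
    return FUNCS[op](*vals)

# Function-defining branch: ADF0(arg0, arg1) = (arg0 * arg0) + arg1
adfs = {'ADF0': (('arg0', 'arg1'), ('+', ('*', 'arg0', 'arg0'), 'arg1'))}

# Result-producing branch: calls ADF0 twice, with its inputs swapped
rpb = ('-', ('ADF0', 'x', 'y'), ('ADF0', 'y', 'x'))

print(eval_tree(rpb, {'x': 3, 'y': 2}, adfs))  # ADF0(3,2) - ADF0(2,3) = 11 - 7 = 4
```

Because the RPB invokes ADF0 twice with different inputs, the square-and-add logic encapsulated in the ADF is reused in two contexts, which is exactly the kind of modularity the mechanism is intended to support.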
It is important not to be fooled by a tidy example like this. ADFs evolved in real applications are typically complex and can be very difficult to understand. Further, simply including ADFs provides no guarantee of modular re-use. As is discussed in Chapter 13, there are no silver bullets. It may be that the RPB never calls an ADF, or calls it only once. It is also common for an ADF not to encapsulate any significant logic. For example, an ADF might be as simple as a single terminal, in which case it essentially just provides a new name for that terminal.
In Koza's approach, each ADF is attached (as a branch) to a specific individual in the population. This is in contrast to both Angeline's and Rosca's systems mentioned above, both of which have general pools of modules or components which are shared across the population. Sometimes recursion is allowed in ADFs, but this frequently leads to infinite computations. Typically, recursion is prevented by imposing an order on the ADFs within an individual and by restricting calls so that ADFi can only call ADFj if i < j.
In the presence of ADFs, recombination operators are typically constrained to respect the larger structure. That is, during crossover, a subtree from ADFi can only be swapped with a subtree from another individual's ADFi.
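This constraint can be sketched as follows, assuming individuals are stored as dictionaries mapping branch names to nested-list trees (the representation and all function names are illustrative assumptions, not a standard implementation):

```python
import copy
import random

def subtrees(tree, path=()):
    """Yield (path, subtree) for every node of a nested-list program tree."""
    yield path, tree
    if isinstance(tree, list):
        for i, child in enumerate(tree[1:], start=1):
            yield from subtrees(child, path + (i,))

def replace_at(tree, path, new):
    """Return tree with the subtree at the given path replaced by new."""
    if not path:
        return new
    node = tree
    for i in path[:-1]:
        node = node[i]
    node[path[-1]] = new
    return tree

def branch_respecting_crossover(parent1, parent2, rng):
    """Swap random subtrees between the SAME branch of two individuals,
    so that ADF code never migrates into an RPB (or into a
    differently-indexed ADF)."""
    child1, child2 = copy.deepcopy(parent1), copy.deepcopy(parent2)
    branch = rng.choice(sorted(child1.keys() & child2.keys()))
    p1, s1 = rng.choice(list(subtrees(child1[branch])))
    p2, s2 = rng.choice(list(subtrees(child2[branch])))
    child1[branch] = replace_at(child1[branch], p1, copy.deepcopy(s2))
    child2[branch] = replace_at(child2[branch], p2, copy.deepcopy(s1))
    return child1, child2

ind1 = {'RPB': ['+', ['ADF0', 'x'], 'y'], 'ADF0': ['*', 'arg0', 'arg0']}
ind2 = {'RPB': ['-', 'x', ['ADF0', 'y']], 'ADF0': ['+', 'arg0', 'arg0']}
c1, c2 = branch_respecting_crossover(ind1, ind2, random.Random(1))
```

The key design point is that a single branch name is chosen first, and both crossover points are then drawn from that branch, so genetic material stays within the same architectural component.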
The program's result-producing branch and its ADFs typically have different function and terminal sets. For example, the terminal set for ADFs usually includes arguments such as arg0 and arg1. Typically the user must decide in advance the primitive sets, the number of ADFs and any call restrictions needed to prevent recursion. However, these choices can be evolved using the architecture-altering operations described in Section 6.1.2.
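A hypothetical per-branch primitive-set specification might look like the following (all names, the dictionary layout and the checking function are illustrative assumptions): the RPB may reference the problem variables and call ADF0, while ADF0 sees only its formal arguments and, since it does not appear in its own function set, cannot call itself.

```python
# Hypothetical per-branch primitive sets; names are illustrative.
PRIMITIVES = {
    'RPB':  {'functions': ['+', '-', '*', 'ADF0'], 'terminals': ['x', 'y']},
    'ADF0': {'functions': ['+', '-', '*'],         'terminals': ['arg0', 'arg1']},
}

def check_branch(tree, branch):
    """Verify that a nested-list tree uses only that branch's primitives."""
    fns = PRIMITIVES[branch]['functions']
    terms = PRIMITIVES[branch]['terminals']
    if isinstance(tree, list):
        return tree[0] in fns and all(check_branch(c, branch) for c in tree[1:])
    return tree in terms

print(check_branch(['ADF0', 'x', 'y'], 'RPB'))        # True
print(check_branch(['ADF0', 'arg0', 'arg1'], 'ADF0')) # False: ADF0 cannot call itself
```

Excluding each ADF from its own function set (and from those of lower-indexed ADFs) is one simple way to enforce the call ordering that prevents recursion.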
Koza also proposed other types of automatically evolved program components (Koza, Andre, Bennett, and Keane, 1999). Automatically defined iterations (ADIs), automatically defined loops (ADLs) and automatically defined recursions (ADRs) provide means to reuse code. Automatically defined stores (ADSs) provide means to reuse the result of executing code.
6.1.2 Program Architecture and Architecture-Altering Operations
Koza (1994) defined the architecture of a program to be the total number of trees, the type of each tree (e.g., RPB, ADF, ADI, ADL, ADR, or ADS), the number of arguments (if any) possessed by each tree, and, finally, if there is more than one tree, the nature of the hierarchical references (if any) allowed among the trees (e.g., whether ADF1 can call ADF2).
There are three ways to determine the architecture of the computer programs that will be evolved: the user may simply specify the architecture in advance as part of the preparatory steps; the run may begin with an architecturally diverse population and rely on evolutionary selection to choose among the competing architectures (Koza, 1994); or the architecture may itself be evolved during the run using the architecture-altering operations.
Koza and his colleagues have used these architecture-altering operations quite widely in their genetic design work, where they evolve GP trees that encode a collection of developmental operations that, when interpreted, generate a complex structure such as a circuit or an optical system (see, for example, Section 12.3, page 258).
The idea of architecture-altering operations was extended in the extremely general Genetic Programming Problem Solver (GPPS), which is described in detail in (Koza et al., 1999, part 4). This is an open-ended system which combines a small set of basic vector-based primitives with the architecture-altering operations in a way that can, in theory, solve a wide range of problems with almost no input required from the user other than the fitness function. The problem is that such an open-ended system needs a very carefully constructed fitness function to guide it to a viable solution, an enormous amount of computational effort, or both. As a result, GPPS is currently an idea of more conceptual than practical value.