Based on the experience of numerous researchers over many years, it appears that GP and other evolutionary computation methods have been especially productive in areas having some or all of the following properties:
The interrelationships among the relevant variables are unknown or poorly understood (or it is suspected that the current understanding may be wrong). One of the particular values of GP (and other evolutionary algorithms) is in exploring poorly understood domains. If the problem domain is well understood, there may well be analytical tools that provide quality solutions without the uncertainty inherent in a stochastic search process such as GP. GP, on the other hand, has proved successful where the application is new or otherwise not well understood. It can help discover which variables and operations are important; provide novel solutions to individual problems; unveil unexpected relationships among variables; and sometimes discover new concepts that can then be applied in a wide variety of circumstances.
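As a toy illustration of discovering which variables matter, the sketch below searches over randomly generated expression trees for one that fits data whose true law (here, an assumed target of x0*x0 + x1, with x2 a distractor) is hidden from the search. Pure random search stands in for GP's full evolutionary loop, but the representation and fitness measure are GP-like: candidate solutions are variable-shaped trees scored against test cases.

```python
import operator
import random

# Hypothetical target: the output depends only on x0 and x1; x2 is a distractor.
def target(x0, x1, x2):
    return x0 * x0 + x1

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}
VARS = ['x0', 'x1', 'x2']

def random_tree(depth=3):
    """Build a random expression tree: a variable leaf or (op, left, right)."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(VARS)
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, env):
    """Interpret a tree against an environment mapping variable names to values."""
    if isinstance(tree, str):
        return env[tree]
    op, left, right = tree
    return OPS[op](evaluate(left, env), evaluate(right, env))

def fitness(tree, cases):
    """Total absolute error over the test cases (lower is better)."""
    return sum(abs(evaluate(tree, env) - y) for env, y in cases)

random.seed(0)
cases = []
for _ in range(20):
    env = {v: random.uniform(-2, 2) for v in VARS}
    cases.append((env, target(env['x0'], env['x1'], env['x2'])))

# Random search as a stand-in for GP's selection/crossover/mutation loop.
best = min((random_tree() for _ in range(5000)), key=lambda t: fitness(t, cases))
print(best, fitness(best, cases))
```

Trees that ignore the distractor x2 tend to score better, so inspecting good candidates hints at which variables are salient, even before an exact law is found.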
Finding the size and shape of the ultimate solution is a major part of the problem. If the form of the solution is known, then alternative search mechanisms that work on fixed size representations (e.g., genetic algorithms) may be more efficient because they won't have to discover the size and shape of the solution.
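To make the contrast concrete, here is a minimal sketch of the fixed-representation case: if we already know the solution has the form a*x + b, a genome of exactly two numbers suffices, and a simple (1+1)-style hill climb over that fixed-length vector does the job with no need to discover structure. The target y = 3x + 2 is an assumed toy problem.

```python
import random

random.seed(1)

# Known solution form a*x + b: the genome is a fixed-length pair [a, b].
def ga_fitness(genome, cases):
    a, b = genome
    return sum(abs((a * x + b) - y) for x, y in cases)

# Toy target: y = 3x + 2.
cases = [(x, 3 * x + 2) for x in range(-5, 6)]

# Simple (1+1)-style search on the fixed-length genome: mutate, keep if better.
genome = [0.0, 0.0]
for _ in range(2000):
    child = [g + random.gauss(0, 0.1) for g in genome]
    if ga_fitness(child, cases) < ga_fitness(genome, cases):
        genome = child

print(genome)  # converges towards a = 3, b = 2
```

Because the shape of the answer is fixed in advance, none of the search effort goes into discovering it; GP's variable-sized trees earn their keep only when that shape is itself unknown.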
Significant amounts of test data are available in computer-readable form. GP (and most other machine learning and search techniques) benefits from having significant pools of test data. At a minimum there need to be enough data for the system to learn the salient features, while holding enough back for validation and over-fitting tests. It is also useful if the test data are as clean and accurate as possible. GP can deal gracefully with a certain amount of noise in the data (especially if steps are taken to reduce over-fitting), but cleaner data make the learning process easier for any system, GP included.
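The hold-out discipline mentioned above can be sketched in a few lines: shuffle the data, train (here, just select among candidate models) on one slice, and reserve the rest purely for validation. The dataset and candidate slopes are hypothetical.

```python
import random

random.seed(42)

# Hypothetical noisy dataset: y is roughly 2x.
data = [(x, 2 * x + random.gauss(0, 0.1)) for x in range(100)]
random.shuffle(data)

# Hold out 20% that the learner never sees during training/evolution.
split = int(0.8 * len(data))
train, valid = data[:split], data[split:]

def mean_abs_error(slope, cases):
    return sum(abs(slope * x - y) for x, y in cases) / len(cases)

# Select the candidate that does best on the training slice only...
candidates = [1.5, 1.9, 2.0, 2.1, 2.5]
best = min(candidates, key=lambda s: mean_abs_error(s, train))

# ...then report its error on the held-out slice as the honest estimate.
# A large gap between the two errors is a symptom of over-fitting.
print(best, mean_abs_error(best, train), mean_abs_error(best, valid))
```

In a real GP run, the training slice would drive fitness evaluation generation after generation, and the validation slice would be consulted only to detect over-fitting and to compare final candidates.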
There are good simulators to test the performance of tentative solutions to a problem, but poor methods to directly obtain good solutions. In many domains of science and engineering, simulators and analysis tools have been constructed that allow one to evaluate the behaviour and performance of complex artifacts such as aircraft, antennas, electronic circuits, control systems, optical systems, games, etc. These simulators contain enormous amounts of knowledge of the domain and have often required several years to create. These tools solve the so-called direct problem of working out the behaviour of a solution or tentative solution to a problem, given the solution itself. However, the knowledge stored in such systems cannot be easily used to solve the inverse problem of designing an artifact from a set of functional or performance requirements. A great advantage of GP is that it is able to connect to simulators and analysis tools and to "data-mine" the simulator to solve the inverse problem automatically. That is, the user need not specify (or know) much about the form of the eventual solution before starting.
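The direct/inverse distinction can be sketched as follows: a black-box "simulator" scores any candidate design (the direct problem), and the search solves the inverse problem purely by querying it, never inspecting its internals. The simulator here is a made-up response surface, and simple hill climbing stands in for GP's evolutionary loop.

```python
import random

random.seed(7)

# Stand-in simulator: solves the direct problem, i.e. given a candidate
# design (here just three parameters), it reports a performance score.
def simulate(design):
    a, b, c = design
    # Hypothetical black-box response surface; higher is better.
    return -((a - 1.0) ** 2 + (b + 2.0) ** 2 + (c - 0.5) ** 2)

# Inverse problem: find a high-scoring design using only simulator calls.
best = [0.0, 0.0, 0.0]
for _ in range(3000):
    cand = [x + random.gauss(0, 0.2) for x in best]
    if simulate(cand) > simulate(best):
        best = cand

print(best, simulate(best))
```

Nothing in the search loop knows what makes a design good; all domain knowledge lives inside `simulate`, which is exactly the sense in which GP "data-mines" an existing simulator to solve the inverse problem.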
Conventional mathematical analysis does not, or cannot, provide analytic solutions. If there is a good exact analytic solution, one probably wants to use it rather than spend the energy to evolve what is likely to be an approximate solution. That said, GP might still be a valuable option if the analytic solutions have undesirable properties (e.g., unacceptable run times for large instances), or are based on assumptions that don't apply in one's circumstances (e.g., noise-free data).
An approximate solution is acceptable (or is the only result that is ever likely to be obtained). Evolution in general, and GP in particular, is typically about being "good enough" rather than "the best". (A rabbit doesn't have to be the fastest animal in the world: it just has to be fast enough to escape that particular fox.) As a result, evolutionary algorithms tend to work best in domains where close approximations are both possible and acceptable.
Small improvements in performance are routinely measured (or easily measurable) and highly prized. Technological efforts tend to concentrate in areas of high economic importance. In these domains, the state of the art tends to be fairly advanced, and, so, it is difficult to improve over existing solutions. However, in these same domains small improvements can be extremely valuable. GP can sometimes discover small, but valuable, relationships.
Two (of many) examples of successful applications of GP that satisfy many of these properties are the work of Lohn, Hornby, and Linden (2004) on satellite antenna design and Spector's evolution of new quantum computing algorithms that out-performed all previous approaches (Spector, Barnum, and Bernstein, 1998; Spector, Barnum, Bernstein, and Swamy, 1999). Both of these domains are complex, without analytic solutions, yet in both cases good simulators existed which could be used to evaluate the fitness of solutions. In other words, people didn't know how to solve the problems, but they could (automatically) recognise a good solution when they saw one. Both of these applications resulted in the discovery of highly successful and unexpected designs. The key component of the evolved quantum algorithm could in fact be extracted and applied in a wide variety of other settings, leading to major improvements in a number of related quantum algorithms in addition to the one originally under study.