A variety of other systems have been proposed which combine the use of grammars and probabilities. We mention only a few here; a more extended review of these is available in (Shan, McKay, Essam, and Abbass, 2006).
Ratle and Sebag (2001) used a stochastic context-free grammar to generate program trees. The probability of applying each rewrite rule was adapted using a standard EDA approach so as to increase the likelihood of using successful rules. The system could also be run in a mode where rule probabilities depended upon the depth of the non-terminal symbol to which a rewrite rule was applied, thereby providing a higher degree of flexibility.
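The core mechanism can be sketched as follows: programs are sampled top-down from a grammar whose production probabilities are then shifted toward rules that appeared in successful individuals. This is an illustrative sketch only — the grammar, the update rate, and the function names are our own assumptions, not taken from Ratle and Sebag's system.

```python
import random

# Toy stochastic context-free grammar (illustrative, not from the paper).
# Each non-terminal maps to a list of (production, probability) pairs.
grammar = {
    "EXPR": [(["EXPR", "+", "EXPR"], 0.3),
             (["EXPR", "*", "EXPR"], 0.3),
             (["x"], 0.2),
             (["1"], 0.2)],
}

def sample(symbol, depth=0, max_depth=4):
    """Expand a symbol top-down, choosing productions by their probabilities.
    Beyond max_depth, only terminal productions are allowed, so expansion halts."""
    if symbol not in grammar:
        return [symbol]  # terminal symbol
    rules = grammar[symbol]
    if depth >= max_depth:
        rules = [(p, w) for p, w in rules if all(s not in grammar for s in p)]
    prods, weights = zip(*rules)
    choice = random.choices(prods, weights=weights)[0]
    out = []
    for s in choice:
        out.extend(sample(s, depth + 1, max_depth))
    return out

def eda_update(symbol, used_rule_idx, rate=0.1):
    """EDA-style update: shift probability mass toward a rule observed in
    successful programs, then renormalise so the weights still sum to one."""
    rules = grammar[symbol]
    new = [(p, w * (1 - rate) + (rate if i == used_rule_idx else 0.0))
           for i, (p, w) in enumerate(rules)]
    total = sum(w for _, w in new)
    grammar[symbol] = [(p, w / total) for p, w in new]

expr = sample("EXPR")
eda_update("EXPR", 2)  # reward the "x" production
```

The depth-dependent mode described above would correspond to keeping a separate probability vector for each depth rather than a single one per non-terminal.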
The approach taken in program evolution with explicit learning (PEEL) (Shan, McKay, Abbass, and Essam, 2003) was slightly more general. PEEL used a probabilistic L-system where rewrite rules were both depth- and location-dependent. The probabilities with which rules were applied were adapted by an ant colony optimisation (ACO) algorithm (Dorigo and Stützle, 2004). Another feature of PEEL was that the L-system's rules could be automatically refined via splitting and specialisation.
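The depth- and location-dependence of PEEL's rules can be illustrated with a minimal ant-system-style sketch, where each (non-terminal, depth) pair carries its own pheromone vector over rewrite rules. All names, constants, and the update scheme below are illustrative assumptions in the spirit of a standard ant system, not details of PEEL itself.

```python
import random

# Pheromone levels keyed by (non-terminal, depth), so the same rewrite rule
# can have different weights at different depths (illustrative, not PEEL's
# actual data structures).
RULES = ["EXPR -> EXPR + EXPR", "EXPR -> x", "EXPR -> 1"]
pheromone = {("EXPR", d): [1.0] * len(RULES) for d in range(4)}

def choose_rule(symbol, depth):
    """Pick a rule index with probability proportional to its pheromone level."""
    tau = pheromone[(symbol, depth)]
    return random.choices(range(len(RULES)), weights=tau)[0]

def aco_update(path, quality, evaporation=0.1):
    """Evaporate all pheromone, then deposit on the (symbol, depth, rule)
    triples used by a good solution, as in a standard ant system."""
    for key in pheromone:
        pheromone[key] = [(1 - evaporation) * t for t in pheromone[key]]
    for symbol, depth, rule_idx in path:
        pheromone[(symbol, depth)][rule_idx] += evaporation * quality

# A hypothetical derivation path taken by one "ant" and its solution quality.
path = [("EXPR", 0, 0), ("EXPR", 1, 1), ("EXPR", 1, 2)]
aco_update(path, quality=2.0)
```

Rule splitting and specialisation would then amount to introducing new (symbol, depth) entries when a single pheromone vector proves too coarse.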
Other programming systems based on probabilistic grammars which are optimised via ant systems include ant-TAG (Abbass, Hoai, and McKay, 2002; Shan, Abbass, McKay, and Essam, 2002), which uses a tree-adjunct grammar as its main representation, and generalised ant programming (GAP) (Keber and Schuster, 2002), which is based on a context-free grammar. Other systems which learn and use probabilistic grammars include grammar model based program evolution (GMPE) (Shan, McKay, Baxter, Abbass, Essam, and Hoai, 2004), the system described in (Bosman and de Jong, 2004a,b) and Bayesian automatic programming (BAP) (Regolin and Pozo, 2005).