When we (DARPA researchers and I) started to develop the Real Knight Rider Project back in 2000, one of our first projects was to program Hal2000, using X10 programs, to drive a "car" that followed a line on the floor. The basic strategy was to make the car weave back and forth across the line in a "Z" pattern, making a little forward progress on each pass. First, the car veered ahead and to the right until it lost sight of the line. Then it veered ahead and to the left until it again lost sight of the line. Then it started back to the right, and so on. This behavior can be represented by two simple rules: (1) if you are veering to the left and you lose sight of the line, begin to veer right; (2) if you are veering to the right and you lose sight of the line, begin to veer left.
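Here is a rough sketch, in Python, of what those two rules might look like. This is not the real Hal2000/X10 code; sees_line, veer, and the fake sensor at the bottom are stand-ins I made up purely for illustration.

    import itertools

    LEFT, RIGHT = "left", "right"

    def follow_line(sees_line, veer, ticks):
        direction = RIGHT       # start by veering ahead and to the right
        was_on_line = True      # the car starts on top of the line
        for _ in range(ticks):
            veer(direction)                  # keep creeping forward while veering
            on_line = sees_line()
            if was_on_line and not on_line:  # just lost sight of the line:
                # rules (1) and (2): flip the veer direction
                direction = LEFT if direction == RIGHT else RIGHT
            was_on_line = on_line

    # Hypothetical stand-ins so the sketch runs by itself: a fake sensor
    # that "crosses" the line every couple of ticks, and a steering stub.
    readings = itertools.cycle([True, True, False, False])
    follow_line(lambda: next(readings), lambda d: print("veering", d), ticks=12)

Running it prints a few ticks of veering right, then left, then right again: the weaving "Z" described above.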
Imagine that the car is programmed with two IF-THEN rules: (1) move forward when you detect light; (2) back up when you are in the dark. When this car is released into the environment, it exhibits a type of emergent behavior: it seeks out the edge of a shadow, then happily oscillates around that edge. The car can be viewed as an "Edge-Finding A.I. Creature." This edge-finding capability is not explicitly represented in Hal's two rules. Rather, it emerges from the interaction of those rules with specific structures in the environment.
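To see why the oscillation falls out of those two rules, here is a tiny one-dimensional simulation. Again, this is my own illustrative sketch, not Hal's code; the shadow-edge position and step size are made-up numbers.

    # One-dimensional world: positions below SHADOW_EDGE are lit,
    # positions at or beyond it are in shadow. All values are invented.
    SHADOW_EDGE = 10.0
    STEP = 0.7      # distance covered per tick

    def in_light(x):
        return x < SHADOW_EDGE

    x = 0.0
    for tick in range(30):
        # rule (1): move forward when you detect light
        # rule (2): back up when you are in the dark
        x += STEP if in_light(x) else -STEP
        print(f"tick {tick:2d}: x = {x:5.1f} ({'light' if in_light(x) else 'dark'})")

The car marches up to the shadow boundary and then ping-pongs across it forever. Nothing in the two rules mentions edges, yet an edge-finder is what you get.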
We tried my Hal2000/Alan/Hal mex program, and Hal followed the line perfectly. But as Hal approached the end of the line, we realized that we hadn't "programmed in" any rules for what to do at the end of the line. We didn't know what the car would do. We were pleased with the behavior that emerged: the car turned all the way around and started heading back down the line in the other direction. This end-of-line behavior was not explicitly programmed into the car. Rather, it emerged from the interaction between the rules we had written and the unfamiliar environment at the end of the line: with no line left to cross, the car evidently just kept turning in search of it until it found the line again, behind itself.
By the way, I named this car Linda.