At first glance, I said, yeah, that's kinda cool. Neat.
And then, I started reading the comments (always, ALWAYS a mistake), and I discerned that the guy was claiming that this project used Genetic Programming and that it converged slowly.
I know a bit about Genetic Algorithms, and a little about Genetic Programming. They're both techniques for letting the computer search a high-dimensional space, guided by some evaluation function (which need not be mathematically well-behaved, just something a computer can evaluate). In Genetic Programming, you take code and breed new code, which you then execute to generate results, which you run through your evaluation function. Genetic Algorithms are similar, but instead of code, you're working with the data itself.
In each case, you've got a population that you're working with, and you allow that population to reproduce, possibly with new elements coming from multiple parents (see "crossover"), possibly with a single parent (see "mutation").
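To make that concrete, here's a minimal Genetic Algorithm sketch - a toy fitness function (count the ones in a bit string), single-point crossover, per-bit mutation, and a carried-over elite. All the names and parameters here are made up for illustration, not taken from any real tool:

```python
import random

random.seed(0)

GENOME_LEN = 20

def fitness(genome):
    return sum(genome)  # toy evaluation function: count of 1 bits

def crossover(a, b):
    cut = random.randrange(1, len(a))  # single-point crossover of two parents
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def evolve(pop_size=30, elite=10, generations=100):
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:elite]  # the fittest carry over unchanged
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - elite)]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

With a population of 30 and a hundred generations, this reliably evolves a genome at or near the all-ones optimum.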
Bear in mind that this is not meant to be an efficient means of finding a solution - if you've got a small enough search space and a simple enough evaluation function, just go ahead and evaluate it over the entire space. If your space is bounded by linear constraints and your evaluation function is linear or quadratic, there are mathematicians who would love to hook you up with the simplex algorithm (or, for the quadratic case, a quadratic programming solver).
Genetic techniques (GA and GP) are instead suited to problems where the search space is large and the function isn't easy to represent as a closed-form mathematical equation. Case in point - I have the ability to draw a bunch of closed polylines, and I wish to use their contours to approximate The Mona Lisa. Computable, but not an easy optimization problem.
So, here's what made me frustrated - one of the early comments is "is this really a genetic approach? Isn't it just hill climbing?". The guy mentions in his FAQ that he has one parent and one child, for a population of size two, and he keeps the better one. I am convinced that that describes hill climbing. Moreover, I looked at his code, and it looks like he's got a list of polygons as his genotype - which, if he had a larger population, might make it fit the Genetic Algorithm description rather than Genetic Programming. Says me. I'm not sure I'd call HTML or XML or other declarative representations "programming languages" or "code", but others disagree with me.
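For contrast, here's the shape of a one-parent, one-child, keep-the-best loop - which is what that FAQ seems to describe - run on a toy one-dimensional objective of my own invention, not his code:

```python
import random

random.seed(1)

def fitness(x):
    return -(x - 3.0) ** 2  # toy objective: single peak at x = 3

def hill_climb(steps=1000):
    parent = random.uniform(-10, 10)
    for _ in range(steps):
        # "population" of two: the parent and one mutated child
        child = parent + random.gauss(0, 0.5)
        if fitness(child) >= fitness(parent):  # keep whichever is better
            parent = child
    return parent

x = hill_climb()
```

Nothing here crosses over or maintains diversity - it just walks uphill, which is why the label matters.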
I spent some time putting together a quick version of my own, and I'm not entirely satisfied with what I came up with. I've wanted to do something like this for a while, and this was incentive to finally get around to it. It was also a distraction from a number of other projects, which is probably a good thing. Case in point: some of this fits into the system I'm working on to write my own programming language / environment. You can imagine that having more CPUs working concurrently on a problem like this might explore more of the space faster. Now, if I can just find a way to make this problem easy to express in my programming environment, I'll be set.
Anyway, after running my tool for a few hours on another recognizable face, I got this:
That was over 2000 generations, with a population of at least 100 (see below), a carryover of 20, and no crossover.
One thing that I did that I consider cheating was to do some exhaustive searching. Each iteration, I randomly choose something to do, which might include changing the color of a specific rectangle. The typical thing to do in genetic algorithms would be to take the parent, create a child which is in all respects identical to the parent, except for the color of this one rectangle, which would be a different random color. Perhaps the new color is slightly different in the red component. If that incremental change in redness makes it more "fit", then the child will survive better than the parent.
The cheat that I did, if it is a cheat, was to have the parent have 256 children, covering the gamut (so to speak) of all possible values of redness. There should be one that's better than the rest, and that one will get carried on into further generations. This is why I'm not surprised to see a pink rectangle on the upper left, some purple in the upper right, and even something that might be his tie in the bottom center. You might be able to squint and tell who the picture is by the rectangles that form the hair. Or, perhaps, not. And so, because I'm evaluating all possible children from variations in a single dimension (here, the redness of a given rectangle), my population size isn't fixed. I ensure a minimum population size (100) and select a subset to carry over to the next generation (the best 20), which seems a lot more like what I think of as a Genetic Algorithm than what the original guy is doing.
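In code, the exhaustive sweep looks something like this - a sketch of the idea rather than my actual tool; the genotype layout and the `score` function are stand-ins:

```python
def sweep_red(parent, rect_index, score):
    """Generate all 256 children that differ from the parent only in
    one rectangle's red value, and return the fittest one."""
    children = []
    for red in range(256):
        child = [list(rect) for rect in parent]  # copy the genotype
        child[rect_index][0] = red               # assume slot 0 is the red channel
        children.append(child)
    return max(children, key=score)              # only the best child survives

# toy demo: genotype = list of [red] rectangles,
# and a score that prefers rectangle 0's red to be near 200
parent = [[10], [50]]
best = sweep_red(parent, 0, lambda g: -abs(g[0][0] - 200))
```

Each sweep is an exhaustive line search along one dimension, which is what makes it feel like cheating relative to random mutation.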
I'm not entirely happy with my results. I expect that I should be able to get something far more recognizable within an hour or two. Part of my plan is to embrace the cheating and go further with it - in my approach, there are big rectangular regions that I shade with a single color. Rather than evolve the color, I could evolve just the boundary of the region and compute the optimal color for that region directly. Figuring out the best color for a region is simple enough; the hard part is what remains - partitioning an image into flat-shaded facets doesn't have an obvious efficient solution.
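If "best" means minimizing the summed squared per-channel error against the reference image, the optimal fill color is just the per-channel mean of the region's pixels - for example:

```python
def best_fill(pixels):
    """Optimal flat fill (least squared error) for a region.
    pixels: list of (r, g, b) tuples sampled from the reference image."""
    n = len(pixels)
    # the mean minimizes sum of squared differences, channel by channel
    return tuple(round(sum(p[c] for p in pixels) / n) for c in range(3))

region = [(10, 20, 30), (20, 20, 40), (30, 20, 50)]
fill = best_fill(region)
```

So the color dimension can be solved in one pass, leaving only the region boundaries for evolution to chew on.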
More details if and when I do more work on this project.
ETA: I ran Alsing's tool on that same reference image, and got this after 3557 generations:
Or, at least, I think it's 3557 generations. Maybe more like four times that. Like I said, I disagree with his nomenclature, so I don't know whether his "generations" mean anything like mine. He goes through generations much faster than I do, so his tool produced that picture much more quickly than mine produced mine.