How genetic algorithms work.

As for a general genetic algorithm (GA), it's actually crazy simple, but a lot of people have a hard time wrapping their brains around it because they overthink it. I am passionate about this, so allow me to spend my break time telling you all about it.

Warning: WALL OF TEXT. Skip to the bottom for some links to fun demos you can play with in your browser.

Say you've got a problem. It's a complicated one, and one where you just can't work backwards to find the ideal solution. A common example is the set of input weights for a neural network, which is a whole different can of worms. The point is that you have a complex problem where the answer boils down to a single set of values, but you have no idea what those values should be. You probably don't even know where to start. This is where you use a GA.

Optional Context:

For the sake of example, let's say the problem is that we have a neural AI designed to find food, avoid threats, and breed. Exactly how that's implemented is unimportant. All we need to know is that the AI itself is the easiest AI you'll ever write, because it takes input data, mashes it together with numbers from a list, and spits out new numbers. Defining that list of numbers is the real problem with neural AI, since working backwards is only possible for very simple designs, and even then it is complicated.
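Just to make that concrete, here's a minimal sketch of what such an AI could look like. The function name, the sensor inputs, and the squashing math are all made up for illustration; the only point is that the genome is a flat list of numbers that fixed code reads but never changes.

```python
def run_creature_ai(sensor_inputs, genome):
    """Hypothetical 'neural' AI: weight each sensor reading by one gene and squash the sum."""
    # Pair each input with a gene and add up the weighted values.
    total = sum(reading * gene for reading, gene in zip(sensor_inputs, genome))
    # Squash into a 0..1 "decision" value (say, how strongly to move toward food).
    return total / (1 + abs(total))

# 5 sensor readings, 5 genes -- the genome is just data fed into fixed code.
print(run_creature_ai([0.2, 0.9, 0.1, 0.5, 0.3], [12, 200, 4, 88, 31]))
```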

Actual GA:

To put a finer point on the above, the solution we're after is a list of 5 numbers between 0 and 255. For example, it could be any of these:

0     56    94    26    4

0     0     0     0     0

37    20    39    86    255

Each of those 3 lists is a genome: a list of genes. Any of them could be the ideal solution. Unfortunately, for a list of 5 numbers between 0 and 255 there are over 1,000,000,000,000 combinations, and only one is the best. We may never find that one, but the point of a GA is to keep finding slightly better solutions until one is either perfect or good enough.

So, how does a GA work? We start by generating a population, also called a gene pool. Here's the population we'll use to start:

0     56    94    26    4

37    20    39    86    255

3     38    96    62    26

It's a tiny population. In practice, there'd be a population of 100, maybe even 1,000. I don't have time to type that much, so our population size is 3 for now.
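In code, generating that starting population is about as simple as it sounds. Here's a minimal sketch; the names and sizes are just my choices to match the running example.

```python
import random

GENOME_LENGTH = 5        # 5 genes per genome, as in the example
GENE_MIN, GENE_MAX = 0, 255
POPULATION_SIZE = 3      # tiny, to match the walkthrough; use 100+ for real work

def random_genome():
    """A genome is just a list of random gene values."""
    return [random.randint(GENE_MIN, GENE_MAX) for _ in range(GENOME_LENGTH)]

def random_population(size=POPULATION_SIZE):
    return [random_genome() for _ in range(size)]

population = random_population()
print(population)
```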

The next step in the GA is to evaluate the population and assign each genome a fitness score. In this case, a higher score is better. How do you evaluate fitness? It depends completely on the application. For the context I gave above, we'd plug each genome into the AI, run it, and score the AI's performance, probably based on how long it survives. We'll say that's scored on a scale of 0 to 100.

Genome                        Fitness

0     56    94    26    4     7

37    20    39    86    255   9

3     38    96    62    26    2

Pretty crappy scores, right? That's expected. Those were completely random genomes and the chances of them doing well were pretty tiny. The important thing is that we can rank them in terms of fitness. That matters because the next step is to remove the lowest performers from the population, because they suck. Fuck those losers, they should have tried harder. How many do we eliminate? This depends yet again on the application. I'm going to say that I wipe out all but the best 2. This culling marks the end of the first generation.
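Here's what evaluate-rank-cull can look like in code. The `evaluate` function below is a placeholder that hands back a random score, because real scoring depends entirely on your application; everything else is my own naming.

```python
import random

def evaluate(genome):
    """Placeholder fitness function: score the genome from 0 to 100.

    In the food/threat/breed example you'd run the AI with this genome
    and score how long it survives; here we fake it so the code runs.
    """
    return random.randint(0, 100)

def rank_and_cull(population, survivors=2):
    """Score every genome, sort best-first, and keep only the top performers."""
    scored = [(evaluate(genome), genome) for genome in population]
    scored.sort(key=lambda pair: pair[0], reverse=True)   # higher fitness is better
    return [genome for score, genome in scored[:survivors]]

population = [[0, 56, 94, 26, 4], [37, 20, 39, 86, 255], [3, 38, 96, 62, 26]]
survivors = rank_and_cull(population)    # only the best 2 make it out alive
print(survivors)
```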

So my population is culled down to 2. Where does that get us? Well, now we refill the population with new genomes, based on the best genomes of the past generation. How do we do this? There are a couple of options, and typically a mixture of them is used.

  • New random genomes: Simple enough. We replenish the population with completely random genomes. This has the benefit of introducing new patterns and greater variety, but it diminishes the benefit from previous generations.

  • Recombine the best of the previous generation: Think sexual reproduction. Two genomes get mixed together to create a new, unique genome that carries some genes from one parent and some genes from another. You can even get freaky and mix more than 2 genomes together. This has the benefit of possibly combining the strengths of the previous generation's best and brightest, but it also means little or no variety gets added with each generation.

In my experience, it's best to fill most of the population with recombined genomes, and then toss in some randoms to see what shakes loose. Every now and then, especially in the earliest generations, a random will do better than the existing genomes, and the population takes a leap forward.
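Here's a sketch of that refill step, assuming two survivors and my own made-up split between recombination and fresh randoms:

```python
import random

def random_genome(length=5):
    return [random.randint(0, 255) for _ in range(length)]

def crossover(parent_a, parent_b):
    """Mix two parents gene-by-gene: each position is copied from one parent at random."""
    return [random.choice(pair) for pair in zip(parent_a, parent_b)]

def refill(survivors, target_size=3, random_fraction=0.2):
    """Build replacement genomes: mostly children of survivors, plus a few randoms.

    Whether the survivors themselves stay in the pool is the elitism question
    covered further down; here they are only used as parents.
    """
    new_genomes = []
    while len(new_genomes) < target_size:
        if random.random() < random_fraction:
            new_genomes.append(random_genome())        # toss in a random, see what shakes loose
        else:
            mom, dad = random.sample(survivors, 2)     # pick two distinct parents
            new_genomes.append(crossover(mom, dad))
    return new_genomes

print(refill([[0, 56, 94, 26, 4], [37, 20, 39, 86, 255]]))
```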

Regardless of how you replenish the population, there are a few more things to do to the new genomes before they can be evaluated. This is the time for mutation. Exactly as you might expect, we randomly mix up a couple of genes to introduce some new variety. Typically, you read through each genome and roll a die for each gene, so each gene has some probability of getting mutated. This probability is the mutation rate, and it should stay low. If you mutate too much, you've basically randomized the genome; a mutation rate of 1% is a reasonable value, but the exact value again depends on your application. There are several kinds of mutation (there's a quick sketch in code after this list).

  • Totally random gene: Simple. Just pick a new, completely random gene to replace the existing one.

  • Swap: Pick two genes and switch them. Perhaps those gene values are good, but they are in the wrong spot.

  • Transpose: Like swap, but bigger. You cut a sequence of genes from one part of the genome and paste it someplace else in the genome.

  • Nudge: Okay, I made that up. I don't know a formal term for this, but I use it and it works pretty well. Randomly increase or decrease a gene value slightly.
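Here's a rough sketch of a mutation pass covering three of those four types (transpose is left out to keep it short). The 0-255 gene range and the nudge size are just assumptions to match the running example.

```python
import random

MUTATION_RATE = 0.01    # 1% chance per gene, as suggested above

def mutate(genome, rate=MUTATION_RATE):
    """Walk the genome; each gene has a small chance of being mutated in some way."""
    genome = list(genome)                        # work on a copy
    for i in range(len(genome)):
        if random.random() >= rate:
            continue                             # this gene gets left alone
        kind = random.choice(["random", "swap", "nudge"])
        if kind == "random":
            genome[i] = random.randint(0, 255)               # totally random replacement
        elif kind == "swap":
            j = random.randrange(len(genome))                # switch places with another gene
            genome[i], genome[j] = genome[j], genome[i]
        else:                                                # nudge it up or down a little
            genome[i] = min(255, max(0, genome[i] + random.randint(-10, 10)))
    return genome

print(mutate([0, 56, 94, 26, 4], rate=0.5))   # exaggerated rate so you can see changes
```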

So you've got your new population, right? There's one more thing to consider. Do you keep the winners of the last generation in the population? That's elitism, and sometimes it is good and sometimes it is bad. I favor it, personally, since it is entirely possible that every new genome in the next generation will be worse than the best of the previous generation. With elitism, it is possible for a single genome to just never die. Unlikely, but possible. Maybe you hit that 1 in 1,000,000,000,000 chance of landing the ideal solution on the first go!
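Put together, one version of the whole loop (with elitism) looks something like this. The fitness function here is a deterministic stand-in just so the script runs, and every name and number is a choice you'd tune for your own application; treat it as a sketch, not the one true way.

```python
import random

def random_genome(length=5):
    return [random.randint(0, 255) for _ in range(length)]

def evaluate(genome):
    """Stand-in fitness: pretend genomes whose genes sum close to 600 survive longest."""
    return max(0, 100 - abs(sum(genome) - 600) // 10)

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(genome, rate=0.01):
    return [random.randint(0, 255) if random.random() < rate else g for g in genome]

def run_ga(pop_size=100, survivors=20, generations=50, elitism=True):
    population = [random_genome() for _ in range(pop_size)]
    for gen in range(generations):
        ranked = sorted(population, key=evaluate, reverse=True)   # evaluate and rank
        best = ranked[:survivors]                                 # cull the losers
        population = list(best) if elitism else []                # elites survive (or not)
        while len(population) < pop_size:
            mom, dad = random.sample(best, 2)
            population.append(mutate(crossover(mom, dad)))        # breed and mutate
        print(f"generation {gen}: best fitness {evaluate(best[0])}")
    return max(population, key=evaluate)

print(run_ga())
```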

So, now you have your new population. Evaluate it, rank the genomes, cull, and repopulate. If your evaluation and genome interpretation are valid (that's the tricky part and where Biogenesis and OP struggle), you'll see those fitness scores increasing with every generation. Eventually, your fitness scores will probably plateau, which is to say they will go many generations without any improvement at all. This isn't a big deal, and it happens to everyone eventually. Usually it means that the best possible solution has been found, given the bias of the genome design and evaluation process. That's right, your implementation can and will bias the results in some unpredictable fashion. Minimizing that bias is the goal.

Now, sometimes a plateau will happen while your generations are still doing crap. If your implementation does allow for a better solution (which it might), you can probably bust up the plateau with a major event (there are rough code sketches after the list below). This isn't something I learned in school, just from experience. I've disagreed with some genuine experts in the field regarding this part, and they are no doubt right. But my ways work, and an extra tool is always good to have, right?

  • Mass extinction: Dinosaurs weren't the best thing to evolve, right? Just the best thing at the time. Wiping out the dominant majority makes plenty of room for fresh variety. Hopefully some of that variety will be an improvement. For our example, I'd remove the whole population except the #1 current best, and generate randoms for the rest of the population. Those old elite genes will still be there, but now they are mixing with something fresh. Sometimes it helps, sometimes it doesn't.

  • Mass mutation: A massive spike in the mutation rate for a couple generations. Entirely unique species of bacteria and insect have been observed living close to the Chernobyl reactor. The radiation mutates them, and some of those mutations have helped those critters reproduce so the mutations stick around. This usually works, though sometimes it can hurt.

  • Bring back the dinosaurs: Mammals are superior to dinosaurs, and that's why we outlived them, right? Drop a T-Rex into Times Square and see how superior humans are. My money would be on 20-1, T-Rex's favor. If you don't use elitism, it is possible that good genomes will be lost to time and later generations will have a lower average fitness. I like to save good performers and reintroduce them to later generations. Sometimes the old genome is wiped out in one generation, but usually it sticks around long enough to reintroduce those good genes.
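For completeness, here are rough sketches of those three events, under the same assumptions as the earlier snippets (my own names, a 5-gene 0-255 genome, and a hall_of_fame list of saved winners that you'd have to maintain yourself):

```python
import random

def random_genome(length=5):
    return [random.randint(0, 255) for _ in range(length)]

def mass_extinction(population, current_best):
    """Keep only the single best genome; everything else becomes fresh randoms."""
    return [current_best] + [random_genome() for _ in range(len(population) - 1)]

def mass_mutation(normal_rate=0.01, spike_factor=10):
    """Return a temporarily spiked mutation rate to use for a few generations."""
    return normal_rate * spike_factor

def bring_back_the_dinosaurs(population, hall_of_fame, how_many=2):
    """Swap a few current genomes out for saved high performers from past generations."""
    revived = random.sample(hall_of_fame, min(how_many, len(hall_of_fame)))
    return population[:len(population) - len(revived)] + revived
```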