Magic!

First of all, welcome to Humbug! This first post will not be tackling any woo, but it will be dealing with something I think is really cool: magic.

Magicians are often skeptics, a tradition started by Houdini, one of the finest illusionists ever to astound us. He devoted a large portion of his time to countering the bogus, silly, and oftentimes harmful claims of so-called psychics and mediums. Houdini was an awesome guy. After his death, many more illusionists and magicians took up the same cause, notable among them James Randi and Penn and Teller. Today, though, I want to talk about a particularly amazing trick done by Derren Brown on his television show “Mind Control.”

In general, I know it’s bad form to tell how another magician does his tricks, but this particular trick is less tricksy and more science, so I feel justified in telling you how it’s done; more accurately, how I think it’s done. The particular trick I’m speaking of was done in the first episode of “Mind Control.” It was, in my opinion, the most clever application of genetic algorithms I have seen. The trick goes as follows:

A conductor is presented to an orchestra. Derren has the conductor write down the name of some piece of music on a piece of paper, so that neither Derren nor the orchestra knows what the song is. Derren then instructs the orchestra to play quiet noise and to try to harmonize with whatever music they hear; that is, to “let something come out.” Derren now ties the conductor’s hands behind his back and instructs him to think very intensely about the piece of music, telling him that he will transmit that thought to the orchestra, who will then begin to play it, all without speaking.

Here is a video of the trick. I’ll explain how to do it shortly.

Before I explain how it works, I’m going to take a somewhat mathematical approach to the subject. I’m going to list premises, axioms if you will; these are my fundamental assumptions about the trick. This is useful because it gives us an easy way to tell whether my analysis holds up: if we can show that one of my axioms is false or incomplete, then obviously my analysis is broken. So, here are the axioms:

Axiom 1: The musicians (including the conductor) are very good at playing music. That is, they are capable of determining what key they are in, what key others are in, and how to transpose music from one key to another on the fly.
Axiom 2: People (in general) respond to approval from authority in a positive way.
Axiom 3: Mind reading, psychic powers, group telepathy, and any other form of magical woo were not used.
Axiom 4: People want to be fooled.

Now, the explanation. Derren’s first step, after he instructs the conductor to write down the song, is to tell the orchestra to play. He knows (because of axiom 1) that the musicians will naturally migrate toward playing something that sounds good. This means they’ll likely play something in the same key, at the same tempo, and so on, so Derren already has that working for him. Further, he knows that the conductor, an authority figure for the orchestra, knows what he wants to hear. In tying the conductor’s hands behind his back, Derren forces the conductor to use only positive reinforcement to encourage the “correct” song to come out of the orchestra. Notice that, at one point, the conductor is feverishly encouraging one of the violinists. Since we know, by axiom 3, that Derren isn’t psychically telling her to play anything, we can conclude from axiom 2 that her reaction to positive reinforcement is to play louder, with more emphasis. From axioms 1 and 4, we know that the other musicians will naturally follow suit, since they know she is being praised for her playing, and they too want to be praised for theirs. The result is that one particular theme gets plucked out. The rest follows from the fact that the musicians know what to play.

Shortly after the conductor is freed, he mentions that he wanted the key to be “D major” and asks the orchestra what key they were playing in; they respond with, of course, “D major.” How did that happen? Simply enough: the conductor singled out the violinist he could hear playing in the key he wanted, and once she was being reinforced, the rest of the orchestra followed suit.
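To see just how little machinery this dynamic needs, here’s a toy simulation of it. To be clear, this is entirely my own illustration, with made-up probabilities, not anything from the show:

```python
import random

KEYS = ["C", "D", "E", "F", "G", "A", "B"]
TARGET = "D"  # the key the conductor is thinking of

# each of 20 musicians starts out noodling in a random key
orchestra = [random.choice(KEYS) for _ in range(20)]

for round_number in range(1, 100):
    # the conductor reinforces everyone already playing the target key
    reinforced = sum(1 for key in orchestra if key == TARGET)
    if reinforced == len(orchestra):
        print(f"converged on {TARGET} major after {round_number} rounds")
        break
    # everyone else, wanting approval too (axioms 2 and 4), imitates a
    # reinforced player if they can hear one, or keeps drifting at random
    orchestra = [
        TARGET if key == TARGET or (reinforced and random.random() < 0.5)
        else random.choice(KEYS)
        for key in orchestra
    ]
```

No telepathy required: selective approval plus imitation converges on the conductor’s key in a handful of rounds.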

The net effect is basically a kind of simulated evolution: the conductor chose the “fittest” sound in the group and appropriately encouraged those musicians to keep playing, and to play louder. In computer science, we use this concept in what’s called a “genetic” algorithm. In fact, genetic algorithms are a wide and varied field in computer science, encompassing not only this concept of “simulated evolution,” which is often used to solve puzzles like the 8-queens puzzle or Sudoku, but also related ideas like “simulated annealing” and other approximations of these ideas.

Specifically, simulated evolution, typically just called a genetic algorithm, works as follows. I warn you, this is not a field with which I am intimately familiar, so the definition will be rough. Essentially, a genetic algorithm starts with a group of initial “gene sequences,” or “subjects.” These subjects are then weighed by a fitness function, which returns a “fitness” value: a number indicating how close the subject is to a correct solution. Finally, we create a new population of subjects through the reproduction function, which takes a list of all subjects along with their respective weights and produces a new list of subjects. This is typically the hardest part of the algorithm to get right. It consists of two parts: the “genetic influence” factor, and the subject combiner.

Ideally, those subjects with higher fitness values should, according to natural selection, have more genetic representation (read: children) in the next generation. At the same time, we need to avoid a population which does not have enough genetic diversity to continue evolving; that is, we can’t give the fittest group too much representation, but we need to make sure we give it enough. There is some debate over the best way to calculate this “genetic influence” factor; suffice it to say it’s hard and deserves a full post of its own, someday.

The somewhat simpler problem, in that there is typically less debate over it, is the recombination of genes. In something like the N-queens problem, where the subjects are vectors of length N, combining two subjects is generally done by preserving the elements of the vector which are the same in both subjects, and randomly choosing among the unequal elements (weighted, with preference toward the fitter subject) to create some static number of new vectors. Consider two subjects for the 4-queens problem, where the nth element’s value determines the queen’s position in column n of the board (indexing from 1): say, ⟨2,4,1,3⟩ and ⟨2,4,1,1⟩, and assume the former is fitter than the latter. The totally preserved “genes” in this recombination are the first three elements, ⟨2,4,1,_⟩; from that we might generate new vectors like ⟨2,4,1,3⟩, ⟨2,4,1,1⟩, ⟨2,4,1,2⟩, and so on. If we choose to always generate twice the number of subjects being combined, and then prune out the half with the lowest fitness values entirely, we can keep a static number of subjects to recombine each round, and after several rounds we are highly likely to find a solution. In fact, even if we don’t find a full solution, genetic algorithms can often give a partial solution, by preserving those genes which make everyone highly fit.
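To make that concrete, here is a minimal sketch of a genetic algorithm for the 4-queens problem. This is my own rough illustration of the idea, not code from any library, and the particular weights (the 0.7 bias toward the fitter parent, the 0.1 mutation rate) are numbers I picked out of the air:

```python
import random

N = 4  # board size; subject[i] is the row of the queen in column i + 1

def fitness(subject):
    """Fraction of non-attacking queen pairs; 1.0 iff a full solution."""
    pairs = N * (N - 1) // 2
    conflicts = sum(
        1
        for i in range(N)
        for j in range(i + 1, N)
        if subject[i] == subject[j] or abs(subject[i] - subject[j]) == j - i
    )
    return 1.0 - conflicts / pairs

def combine(fitter, weaker):
    """Preserve genes the parents agree on; choose the rest from either
    parent, weighted toward the fitter one."""
    child = [a if a == b or random.random() < 0.7 else b
             for a, b in zip(fitter, weaker)]
    if random.random() < 0.1:  # occasional mutation keeps some diversity
        child[random.randrange(N)] = random.randint(1, N)
    return child

def evolve(population_size=20, generations=200):
    population = [[random.randint(1, N) for _ in range(N)]
                  for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == 1.0:
            return population[0]  # found a full solution
        # generate twice as many children as pairs being combined,
        # then prune back down to a static population size
        children = [combine(population[i], population[i + 1])
                    for i in range(population_size - 1)
                    for _ in range(2)]
        population = sorted(population + children, key=fitness,
                            reverse=True)[:population_size]
    return population[0]  # best (possibly partial) answer found

print(evolve())  # e.g. [2, 4, 1, 3] or [3, 1, 4, 2]
```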

There are, however, downsides to genetic algorithms. Fundamentally, any sufficiently poorly written genetic algorithm degrades into a very slow random walk over the “search space.” Even the very good ones, if they are given (or generate) bad initial subject populations, can be reduced to sitting at a local optimum in the search space. What is a search space, you ask? Well, if you imagine that the fitness function rates any “solution” on the range of reals [0, 1], with f(x) = 1 iff x is an optimum solution, then the “search space” is the graph of the fitness function over all possible inputs. When we “evolve” our answer, we are in fact taking a kind of directed walk along this curve. With the fitness function defined as above, we are looking for maxima in that search space. That is, ideally, we want to find a solution x such that f(x) ≥ f(y) for all other solutions y. There are, however, two problems, both stemming from a lack of diversity.
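For 4-queens the search space is small enough to enumerate outright, which makes the picture concrete; for larger N, where the space has N^N points, the genetic algorithm’s directed walk is the only practical option. Again, a sketch of my own:

```python
from itertools import product

def conflicts(subject):
    """Number of attacking queen pairs; 0 means subject is a solution."""
    return sum(1 for i in range(4) for j in range(i + 1, 4)
               if subject[i] == subject[j]
               or abs(subject[i] - subject[j]) == j - i)

# the whole search space: one queen per column, 4**4 = 256 subjects
space = list(product(range(1, 5), repeat=4))
peaks = [s for s in space if conflicts(s) == 0]
print(f"{len(peaks)} global maxima out of {len(space)} points")
# -> 2 global maxima out of 256 points: (2, 4, 1, 3) and (3, 1, 4, 2)
```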

First of all, we can get stuck. Since we’re “walking” along this curve, we will occasionally run into a local maximum, where we would have to go down in overall fitness to achieve greater fitness later. That is, we have to take one step back to take two steps forward. The problem is that the genetic algorithm is designed to bubble up, not down. This is why a naive genetic algorithm will sometimes only reply with partial answers; it really is the best it can do. The way to fix this is fairly simple: inject new genetic material into the population. If we, for instance, find ourselves at a local maximum, we simply cut out some portion of the weakest subjects in the population and fill their places with new random subjects. We could also simply report that solution and start from scratch. Typically, an approach somewhere in between is used. If your algorithm has the benefit of being able to tell when it is at a local maximum which is not (one of, in some cases) the global maxima[1], you may opt to report the maximum found, then take the first approach and see if you find a new solution. Later, if you find you are arbitrarily close to the same maximum, you may decide to restart your algorithm completely.
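Here is a sketch of the injection idea, under the same 4-queens setup as before; the fraction to replace, and when to trigger the injection, are knobs you would have to tune:

```python
import random

def inject_diversity(population, fitness, n=4, fraction=0.25):
    """Replace the weakest `fraction` of the population with fresh
    random subjects, to help the search escape a local maximum."""
    population = sorted(population, key=fitness, reverse=True)
    keep = int(len(population) * (1 - fraction))
    fresh = [[random.randint(1, n) for _ in range(n)]
             for _ in range(len(population) - keep)]
    return population[:keep] + fresh
```

One simple trigger is to call this whenever the best fitness in the pool hasn’t improved for several generations.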

The latter problem is one I like to call “creeping evolution.” This occurs when the gene pool is diverse enough to change, but that change is getting slower and slower with each generation; it can happen as you approach a maximum. The solution is similar to the former: inject new material, though typically less of it than in the former case.

So there you have it: magic and genetic algorithms. What more could you ask for? I hope you enjoyed the inaugural post. Some links on topics in this post are listed at the bottom.

[1] Most algorithms don’t have this wonderful ability.

Links:

Wikipedia: Genetic Algorithms
Wikipedia: Evolutionary Algorithms
Wikipedia: Derren Brown
Genetic Algorithms: Introduction with Java Applets
Genetic Algorithm & Genetic Programming

~ by jfredett on November 9, 2007.
