Analysis of a Brute-Force Shuffle
February 5, 2015

I was reading a blog post on card shuffling that described a brute-force card shuffling algorithm. The algorithm takes 52 cards labeled 1 to 52, in order. For each card, it throws a 52-sided die and places the card into the deck at the position shown on the die, unless that position was used already. If the position is already taken, the die is thrown until an unused spot comes up.
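A minimal sketch of that procedure, assuming a helper `rand_n(n)` that returns a uniform random integer in \([0, n)\) (the helper is an assumption; the post doesn't show its definition):

```c
#include <stdlib.h>

/* Assumed helper: uniform random integer in [0, n).
 * (rand() % n has a slight modulo bias; ignored here for clarity.) */
static int rand_n(int n) {
    return rand() % n;
}

/* Brute-force shuffle: for each card, roll a 52-sided die until it
 * lands on an unused position, then place the card there. */
void brute_force_shuffle(int deck[static 52]) {
    int used[52] = {0};
    for (int card = 1; card <= 52; card++) {
        int pos;
        do {
            pos = rand_n(52);    /* re-roll on collisions */
        } while (used[pos]);
        used[pos] = 1;
        deck[pos] = card;
    }
}
```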
This algorithm works and is relatively simple. In contrast, here is a much faster algorithm (Fisher–Yates) for doing the same thing:
```c
void fisher_yates_shuffle(int deck[static 52]) {
    for (int i = 0; i < 52; i++) {
        deck[i] = i + 1;
    }
    for (int i = 52 - 1; i > 0; i--) {
        int s = rand_n(i + 1);
        int temp = deck[s];
        deck[s] = deck[i];
        deck[i] = temp;
    }
}
```
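One way to convince yourself, at least for small decks, is to enumerate every possible sequence of die rolls. For \(n = 3\), Fisher–Yates draws \(s_1 \in \{0, 1, 2\}\) and then \(s_2 \in \{0, 1\}\), giving \(3 \cdot 2 = 6\) equally likely roll sequences. A quick check (hard-coding the two swaps) shows that each sequence yields a different permutation, so all \(3! = 6\) orderings are equally likely:

```c
#include <string.h>

/* Enumerate all 6 roll sequences of Fisher-Yates for n = 3 and
 * record the resulting permutations of {1, 2, 3}. */
static void enumerate_perms(int perms[6][3]) {
    int k = 0;
    for (int s1 = 0; s1 < 3; s1++) {
        for (int s2 = 0; s2 < 2; s2++) {
            int deck[3] = {1, 2, 3};
            int t;
            /* i = 2: swap deck[s1] and deck[2] */
            t = deck[s1]; deck[s1] = deck[2]; deck[2] = t;
            /* i = 1: swap deck[s2] and deck[1] */
            t = deck[s2]; deck[s2] = deck[1]; deck[1] = t;
            memcpy(perms[k++], deck, sizeof deck);
        }
    }
}
```

The six sequences produce {2,3,1}, {3,2,1}, {3,1,2}, {1,3,2}, {2,1,3}, and {1,2,3}: each ordering exactly once.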
It’s not a whole lot more complicated, although it might be harder to convince yourself that this always results in a properly shuffled deck.
In his post, the author says that the brute-force algorithm is bad. His presentation made me wonder just how bad.
Calls to rand_n()
Getting a new random number is probably the most expensive part of these calculations, so let's figure out how many times rand_n() is called in each.
Brute-force
For the brute-force algorithm, the chances of getting a collision depend on how many of the slots in the deck are already full. Let's say we have a deck of size \(n\) where \(m\) of the slots already have cards and \(n - m\) slots are free. The expected number of tries should satisfy the following equation:
\[E(m, n) = 1 \cdot \frac{n-m}{n} + (E(m, n) + 1)\cdot \frac{m}{n}\]

Doing the algebra gives:
\[E(m, n) = \frac{n}{n-m}\]

We need to throw a sum around it to take into account all the different \(m\)s:
\[E(n) = \sum\limits_{m=0}^{n-1} {\frac{n}{n-m}} = n \sum\limits_{m=0}^{n-1} {\frac{1}{n-m}}\]

By doing a substitution \(i = n-m\) on \(\sum\limits_{m=0}^{n-1} {\frac{1}{n-m}}\), we get:^{1}
\[\begin{equation}\label{eq:expsimp} E(n) = n \sum\limits_{i=1}^{n} {\frac{1}{i}} \end{equation}\]

Another way to see this is by noticing that the first sum adds together \(\frac{1}{n} + \frac{1}{n-1} + \frac{1}{n-2} + \cdots + \frac{1}{2} + \frac{1}{1}\), while the second does the same but in reverse. The summation above happens to be the \(n\)th harmonic number, and so in the limit, the whole expression is approximately equal to:
\[E(n) \approx n (\gamma + \ln(n))\]

where \(\gamma\) is the Euler–Mascheroni constant. This puts the average case in \(\mathcal{O}(n\ln(n))\). In the worst case, there are infinitely many collisions, so the algorithm never terminates.
Fisher–Yates
In Fisher–Yates, the answer is simple: rand_n() is called \(n - 1\) times, so in both the average and worst cases the number of calls is in \(\mathcal{O}(n)\).
Summary
| Algorithm | Average Case | Worst Case |
| --- | --- | --- |
| Brute-force | \(\mathcal{O}(n\ln(n))\) | Unbounded |
| Fisher–Yates | \(\mathcal{O}(n)\) | \(\mathcal{O}(n)\) |

Calls to rand_n().

Benchmark
The theory’s nice, but how does this work out in practice?
Average Case
First, I had both algorithms shuffle a smattering of deck sizes 10,000 times each. Here’s how long those shuffles took on average:
The brute-force algorithm is definitely slower, but how much slower?
This graph shows the ratio of the brute-force time over the Fisher–Yates time. You can see the logarithmic difference pretty clearly.
Worst Case
To look at worst cases, I ran the brute-force algorithm over a 52-card deck 5,000,000 times. The graph below shows the probability \(y\) that a shuffle finished in exactly \(x\) calls.
The mean (236.08) is close to the expected value (235.98, from Eq. \(\eqref{eq:expsimp}\) above). However, it's clear that there's a big spread. One of the runs took 953 calls, or more than 4 times the expected number. Assuming a true random number generator, then for any \(k\) you choose, if you run the algorithm enough times, you would eventually find an execution calling rand_n() more than \(k\) times. This is clearly not an algorithm suitable for real-time purposes.
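Both the mean and the spread are easy to reproduce by counting rand_n() calls over many simulated shuffles. A rough sketch, assuming the same brute-force scheme and a uniform `rand_n()` helper (neither shown in the post):

```c
#include <stdlib.h>

/* Assumed helper: uniform random integer in [0, n) (modulo bias ignored). */
static int rand_n(int n) {
    return rand() % n;
}

/* Count how many rand_n() calls one brute-force shuffle of a
 * 52-card deck uses. */
int brute_force_calls(void) {
    int used[52] = {0};
    int calls = 0;
    for (int card = 0; card < 52; card++) {
        int pos;
        do {
            pos = rand_n(52);
            calls++;
        } while (used[pos]);
        used[pos] = 1;
    }
    return calls;
}

/* Average call count over a number of simulated shuffles. */
double mean_calls(int runs) {
    long total = 0;
    for (int r = 0; r < runs; r++)
        total += brute_force_calls();
    return (double)total / runs;
}
```

With 100,000 runs the sample mean should land close to the 235.98 computed above, while individual runs range from the minimum of 52 calls up into the high hundreds.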
Conclusion
Okay, so the brute-force algorithm isn't great. It might not terminate, it runs significantly slower than the Fisher–Yates algorithm, and it's not a whole lot simpler. But it's also not horrible: for deck sizes smaller than 70, it's less than 5 times slower. This really surprised me. Naively, I thought that due to the random nature of the algorithm, the runtime might have an exponential or even factorial component. There's probably very little reason^{2} to use the brute-force algorithm over Fisher–Yates, but \(\mathcal{O}(n\ln(n))\) ain't that bad.
The code I used to help benchmark these algorithms can be found here.
After I wrote this post, I realized the brute-force shuffle is related to the Coupon Collector's Problem. The link provides a similar derivation for the expected value, as well as a calculation of the variance.
1. Thanks to Szabolcs Iván for giving me the idea behind this step. ↩
2. BF does perform half as many writes as FY, so perhaps if writes are really expensive… ↩