Welcome to The Riddler. Every week over the past eight years, we have offered up problems related to the things we hold dear around here: math, logic and probability. Usually, two puzzles are presented each week: the Riddler Express for those of you who want something bite-size and the Riddler Classic for those of you in the slow-puzzle movement. Submit a correct answer (I need to receive it before 11:59 p.m. Eastern time on Monday) and you may get a shoutout. Please wait until Monday to publicly share your answers! If you need a hint or have a favorite puzzle collecting dust in your attic, find me on Twitter or send me an email. Have a great weekend!

The time has come. This is indeed the final column for The Riddler here at FiveThirtyEight. On behalf of myself and Oliver Roeder, as well as current and former FiveThirtyEight staff (in particular, my editors: Chadwick Matlin, Santul Nerkar and Maya Sweedler), I want to thank *you*, dear readers, for all your riddle ideas, solutions and everything in between. Writing and solving math puzzles is good fun — but what’s even better has been taking part in this awesome puzzling *community*. Thank you, Riddler Nation!

That said, this isn’t *quite* the end.

The puzzling will continue on Substack via my brand-new newsletter, Fiddler on the Proof, affectionately known as “The Fiddler.” The first puzzle (as well as the solution to this week’s riddle) will drop on July 7. Sign up (for free!) to keep the puzzle goodness going!

As a graduate student in the Riddler Research Lab, I have been tasked with aligning a laser beam inside a perfectly circular mirror. The beam starts at one point on the circumference of the circle and bounces around inside the circle many times before returning to that very same point.

Upon further analysis, I find that the beam creates a 184-gon (i.e., a polygon with 184 sides) inscribed in the circle — but not a *regular* 184-gon, mind you. The head of the lab has asked me to further measure the angles the beam makes as it bounces around, which honestly seems like a lot of busywork. To lighten the load, I grab a labmate and we decide to split the work in half. We each will measure *every other* angle. That is, if the points around the polygon are labeled ABCDEFGH …, I will measure angles ABC, CDE, EFG, etc., while my labmate will measure BCD, DEF, FGH, etc.

In total, I’m responsible for measuring 92 angles. I find that the first 89 angles each measure precisely 178 degrees. What is the sum of the final three angles I’m supposed to measure?

*Important note:* As this is the final column for The Riddler, the solution to this puzzle will appear next Friday morning at thefiddler.substack.com.

Congratulations to Tom Hanrahan of Lexington Park, Maryland, winner of last week’s Riddler and the *final* ruler of Riddler Nation!

Last week was the eighth and final Battle for Riddler Nation, and things were a little different this time around.

In a distant, war-torn land, there were 10 castles. There were two warlords: you and your archenemy. Each castle had its own strategic value for a would-be conqueror. Specifically, the castles were worth 1, 2, 3, … , 9 and 10 victory points. You and your enemy each had 100 soldiers to distribute, any way you liked, to fight at any of the 10 castles. Whoever sent more soldiers to a given castle conquered that castle and won its points. If you each sent the same number of troops, you split the points. You don’t know what distribution of forces your enemy has chosen until the battles begin. Whoever won the most points won the war.

As in previous years, I adjudicated all the possible one-on-one matchups. A victory was worth one “victory point,” while a tie was worth 0.5 victory points. Instead of declaring the winner after this round robin, I eliminated the team with the fewest victory points and repeated the entire process with one fewer competitor.

This year, I received 394 strategies. After eliminating duplicate submissions and unfair strategies (e.g., placing more than 100 total soldiers or effectively placing more than 100 by allocating negative numbers to some castles), 364 strategies remained.

As always, I generated a heat map (with darker orange representing more soldiers), organized by how well each approach fared in the initial round robin, shown on the left below.

With the initial round robin out of the way, the elimination rounds began. One round at a time, I eliminated the weakest-performing strategy in a round robin of the remaining teams, until only one warlord ruled supreme. A similar heat map showing these final rankings appears above on the right.

A few trends are apparent at the tops of these heat maps. In the initial round robin, the strongest warlords clustered their soldiers in castles worth 8, 6, 5, 4, 3 and 2 points. This was one of many ways to earn a total of 28 points, which was sufficient to guarantee a victory (there were 55 total points at stake, so 28 was more than half). In prior years, similar strategies have prevailed, such as clustering in castles worth 10, 9, 5 and 4 points, or in castles worth 10, 9, 6 and 3 points.

That said, these heat maps can be difficult to interpret. Friend-of-The-Riddler Vince Vatter (also a former ruler around here) did an impressive job reconstructing the strategies from the seventh Battle for Riddler Nation by analyzing a similar heat map from last year. (Cheers to Vince for using all the resources at his disposal!)

So let’s take a deeper dive into the strategies that stood atop the final rankings:

The top strategies in FiveThirtyEight’s Final Battle for Riddler Nation, with their distribution of soldiers for each castle

| Final Rank | Initial Rank | Name | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 48 | Tom Hanrahan | 3 | 3 | 3 | 2 | 3 | 25 | 7 | 7 | 38 | 9 |
| 2 | 74 | Matthieu | 0 | 0 | 0 | 0 | 21 | 23 | 0 | 27 | 29 | 0 |
| 3 | 72 | JD | 0 | 0 | 0 | 16 | 0 | 0 | 28 | 28 | 28 | 0 |
| 4 | 269 | Tabitha Torgersen | 0 | 4 | 6 | 1 | 7 | 8 | 11 | 11 | 11 | 41 |
| 5 | 187 | David Kuplic | 3 | 3 | 11 | 15 | 14 | 5 | 9 | 9 | 19 | 12 |
| 6 | 238 | Daniel V | 1 | 3 | 1 | 11 | 11 | 14 | 15 | 2 | 0 | 42 |
| 7 | 7 | Ben Knox | 0 | 1 | 1 | 14 | 19 | 1 | 2 | 2 | 35 | 25 |
| 8 | 10 | Michael DeHaye | 0 | 0 | 11 | 1 | 1 | 24 | 2 | 3 | 34 | 24 |
| 9 | 5 | Martin Stearne | 0 | 0 | 12 | 1 | 1 | 23 | 1 | 1 | 33 | 28 |
| 10 | 219 | Tom F | 0 | 0 | 1 | 3 | 6 | 11 | 26 | 23 | 15 | 15 |

This time around, a few “conventional” strategies (which won prior battles) did fairly well. Ben Knox made a play for castles 4, 5, 9 and 10, while Michael DeHaye and Martin Stearne both went for castles 3, 6, 9 and 10. The three of them did well in the initial round robin and ultimately came in seventh, eighth and ninth. Meanwhile, Matthieu and JD (who came in second and third, respectively) still targeted castles worth 28 points in total, but in combinations that didn’t fare as well in prior battles.

But, in the end, many of the best strategies didn’t do quite so well at the beginning. To win in this new format, you had to make it through every elimination round, even if it wasn’t always pretty. More specifically, you wanted a strategy that merely *survived* against the weaker strategies early on, but that was strong against the better strategies later on.

And so, at the very top, Tom Hanrahan only made a strong play for castles 6 and 9 (worth a combined 15 points), while leaving a smattering of soldiers at the remaining eight castles. Meanwhile, Tabitha Torgersen, who came in fourth, only made a strong play for castle 10. And David Kuplic, who came in fifth, spread soldiers all over the place, with fewer than 20 at every castle.

While this may have been the Final Battle for Riddler Nation, it was refreshing to see a new set of strategies rise to the top. Indeed, in this format, none of the final top six warlords had placed in the top 40 after the initial round robin. Here’s a graph showing how each warlord’s initial and final rankings compared:

In the top right, a *really* bad strategy (e.g., putting 100 soldiers at castle 1) was bad no matter the format of the battle. But beyond those low performers, things got noisy. Put another way, the correlation between initial and final rank was surprisingly low (to me, at least).

Looking at the bottom right of this graph, I have to award special kudos to “John The Warload Winner” (I presume there’s a typo in there), who was the most improved between the initial and final rankings. In the initial round robin, John ranked 306th. But John was a scrappy warlord, avoiding hundreds of eliminations to ultimately place 16th in the final standings.

Congratulations to the winners, and to everyone who participated in this memorable, final edition of the Battle for Riddler Nation!

Well, aren’t you lucky? While this may be the final column of The Riddler here at FiveThirtyEight, the puzzling continues over at Fiddler on the Proof!

Email Zach Wissner-Gross at thefiddler@substack.com.


Friday, June 30, will mark the final column for The Riddler here at FiveThirtyEight. But this isn’t *quite* the end.

I am pleased to announce that the mathematical puzzling will continue on Substack, beginning July 7. You can tune into my forthcoming newsletter, Fiddler on the Proof, affectionately known as “The Fiddler.” I’m hoping to make some more exciting announcements about this soon. But in the meantime, sign up (for free!) to keep the puzzle goodness going.

For now, though, back to this week’s puzzle, which will be very familiar to many of you!

Some readers may be familiar with the first, second, third, fourth, fifth, sixth and seventh Battles for Riddler Nation. If you missed out, you may want to consult the thousands of attack distributions from some of these previous contests.

This week marks the eighth (and final!) such competition. As with the last few battles, I am once again tweaking the rules.

In a distant, war-torn land, there are 10 castles. There are two warlords: you and your archenemy. Each castle has its own strategic value for a would-be conqueror. Specifically, the castles are worth 1, 2, 3, … , 9 and 10 victory points. You and your enemy each have 100 soldiers to distribute, any way you like, to fight at any of the 10 castles. Whoever sends more soldiers to a given castle conquers that castle and wins its victory points. If you each send the same number of troops, you split the points. You don’t know what distribution of forces your enemy has chosen until the battles begin. Whoever wins the most points wins the war.

Submit a plan distributing your 100 soldiers among the 10 castles. Once I receive all your battle plans, I will adjudicate all the possible one-on-one matchups. A victory will be worth one “victory point,” while a tie will be worth 0.5 victory points. After all the one-on-one matchups are complete, whoever has accumulated the *fewest* victory points will be eliminated from the tournament, after which the battle will recommence with one fewer competitor.

If two warlords are tied for the fewest victory points, the first tiebreaker will be whoever has more wins (and fewer ties) and the second tiebreaker will be performance in the preceding round (and then the round before that, etc.). If two or more strategies on the chopping block are precisely the same, I will randomly pick which one to eliminate.
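For those who like to experiment before submitting, the adjudication described above is easy to code up. Here’s a minimal Python sketch (the function names are my own) of the castle scoring and the round-robin tally; the elimination rounds simply repeat the tally after dropping the lowest scorer.

```python
from itertools import combinations

# Score one head-to-head battle: castles are worth 1 through 10 points,
# more soldiers takes the castle, and equal garrisons split the points.
def castle_points(a, b):
    pa = pb = 0.0
    for worth, (sa, sb) in enumerate(zip(a, b), start=1):
        if sa > sb:
            pa += worth
        elif sb > sa:
            pb += worth
        else:
            pa += worth / 2
            pb += worth / 2
    return pa, pb

# Round robin: a head-to-head win is worth 1 victory point, a tie 0.5.
def round_robin(strategies):
    scores = [0.0] * len(strategies)
    for i, j in combinations(range(len(strategies)), 2):
        pa, pb = castle_points(strategies[i], strategies[j])
        if pa > pb:
            scores[i] += 1.0
        elif pb > pa:
            scores[j] += 1.0
        else:
            scores[i] += 0.5
            scores[j] += 0.5
    return scores
```

As a sanity check, stacking all 100 soldiers on castle 1 loses 1–54 to an even 10-per-castle spread.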

Whoever survives *every* round will be crowned the last king or queen of Riddler Nation!

Congratulations to Matt Carlton of Los Osos, California, winner of last week’s Riddler Express.

Last week, two teams were starting a competition with the flip of a coin *and* a roll of a die. That is, one team’s captain flipped a coin while the other team’s captain rolled a die. They continued doing this until the coin was the same (whether heads or tails) for *three* consecutive flips or the number that was face-up on the die was the same for *two* consecutive rolls.

On average, how many coin flips did it take to get three in a row? And how many die rolls did it take to get two in a row?

Let’s start with the coins, which was arguably the trickier of the two. Suppose the expected number of flips needed to get three in a row from the outset was *E*(0). Similarly, let’s say the expected number of remaining flips when the current sequence ended with one in a row (regardless of whether that was heads or tails) was *E*(1), when it ended with two in a row was *E*(2), and when it ended with three in a row was *E*(3).

From there, solver Emilie Mitchell set up a system of equations between these values, allowing her to solve for each of them. Since the goal was to get three in a row, *E*(3) was simply zero — you didn’t need any additional flips to get three in a row. Working backwards, when your sequence of flips ended with two in a row, your next flip gave you a 50 percent chance of getting three in a row and a 50 percent chance of reverting to just one in a row. Mathematically, that meant *E*(2) was equal to 1 + *E*(1)/2.

Meanwhile, when your sequence of flips ended with one in a row, your next flip gave you a 50 percent chance of getting two in a row and a 50 percent chance of reverting to just one in a row. Mathematically, that meant *E*(1) = 1 + *E*(2)/2 + *E*(1)/2. Finally, when you hadn’t yet flipped even once, your first flip was guaranteed to give you one in a row. Mathematically, this meant *E*(0) = 1 + *E*(1).

Combining these equations gave you *E*(3) = 0, *E*(2) = 4, *E*(1) = 6 and *E*(0) = 7. And so, on average, it took the first captain **seven flips** to get three heads or tails in a row.
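If you’d like to double-check that algebra, the system solves neatly with exact fractions. A quick sketch, with the substitution spelled out in the comments:

```python
from fractions import Fraction as F

# E(3) = 0, E(2) = 1 + E(1)/2, E(1) = 1 + E(2)/2 + E(1)/2, E(0) = 1 + E(1).
# Substituting E(2) into the E(1) equation:
#   E(1) = 1 + (1 + E(1)/2)/2 + E(1)/2  =>  E(1)*(1 - 1/4 - 1/2) = 3/2
E1 = F(3, 2) / (1 - F(1, 4) - F(1, 2))
E2 = 1 + E1 / 2
E0 = 1 + E1
print(E0, E1, E2)  # 7 6 4
```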

Now for the die. After the first roll, the probability of getting two in a row was 1/6 for every subsequent roll. That meant the probability of getting two in a row in two rolls was 1/6, in three rolls it was (5/6)·(1/6), in four rolls it was (5/6)^{2}·(1/6), and so on. The expected number of rolls was therefore 2·(1/6) + 3·(5/6)·(1/6) + 4·(5/6)^{2}·(1/6) + 5·(5/6)^{3}·(1/6) + …. The sum of this arithmetic-geometric series turned out to be 7.
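That series is easy to corroborate numerically. Here’s a one-line Python sketch summing enough terms for the tail to vanish:

```python
# Partial sum of 2*(1/6) + 3*(5/6)*(1/6) + 4*(5/6)**2*(1/6) + ...;
# the k-th term covers the case where the first repeat lands on roll k+2.
expected_rolls = sum((k + 2) * (5 / 6) ** k * (1 / 6) for k in range(1000))
print(round(expected_rolls, 9))  # 7.0
```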

And so, on average, it took the second captain **seven rolls** to get two in a row.

I for one thought it was pretty neat that these two events — three in a row for a flip with two outcomes and two in a row for a roll with six outcomes — occurred after the same number of attempts, on average.

But for extra credit, you were asked which event was more likely to occur *first*. It was tempting to think that both were equally likely to occur first, since they both occurred with the same average number of attempts. However, their probability distributions — that is, the probability with which they first occurred after a given number of attempts — were markedly different, as illustrated by solver Josh Silverman:

Sampling these two distributions revealed that **two in a row for the die was more likely to come first**, with a probability of 29/59, or about 49 percent of the time. By comparison, three in a row came first for the coin with a probability of 25/59. And, as you might have surmised at this point, they occurred at the same time with probability 5/59.
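In fact, those three probabilities can be computed exactly rather than sampled. The only state that matters is the coin’s current run length (the die always sits at “one in a row”), so each probability satisfies a small linear system. Here’s a Python sketch with exact fractions (this formulation is my own, not necessarily Josh’s):

```python
from fractions import Fraction as F

# From coin-run-length 1: x1 = a1 + (5/6)*(x2/2 + x1/2), since play continues
# when the die misses (5/6) and the coin run then grows or resets evenly.
# From run length 2: x2 = a2 + (5/12)*x1, since play continues only when the
# coin resets (1/2) and the die misses (5/6). The absorption terms a1, a2
# differ for "die first", "coin first" and "tie".
def absorb(a1, a2):
    # Substitute x2 and solve the single linear equation for x1.
    return (a1 + F(5, 12) * a2) / (1 - F(5, 12) - F(5, 12) * F(5, 12))

die_first = absorb(F(1, 6), F(1, 12))   # die repeats now (1/6); at run 2, 1/12
coin_first = absorb(F(0), F(5, 12))     # coin hits 3 in a row, die misses: 5/12
tie = absorb(F(0), F(1, 12))            # both finish on the same step: 1/12
print(die_first, coin_first, tie)  # 29/59 25/59 5/59
```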

Congratulations to Gary M. Gerken of Littleton, Colorado, winner of last week’s Riddler Classic.

Last week, you studied the “middle-square method” for generating four-digit pseudorandom numbers (numbers that appear random, but are derived in a deterministic sequence).

According to this method, you started with a four-digit number, such as 9,876. When you squared it, you got the eight-digit number 97,535,376. Your next pseudorandom number was taken from the middle four digits of that square: 5,353. And to get the *next* pseudorandom number, you squared 5,353, which gave you 28,654,609, the middle four digits of which were 6,546.

Of course, many four-digit numbers had a square that’s seven digits rather than eight. And in a sequence of middle-square numbers, you’d likely encounter smaller numbers whose squares had six or fewer digits. In these cases, you could append zeros to the beginning of the square until you had eight total digits, once again taking the middle four digits.

No matter what initial four-digit number you picked, your sequence of pseudorandom numbers would eventually repeat itself in a loop. If you wanted the longest sequence of such numbers before any repetition occurred, what starting number should you have picked? And how many unique numbers were in the sequence?

Many puzzles in this column require analytical acumen, whereas others require computational cleverness. (In my opinion, some of the very best puzzles require *both*.) This puzzle definitely erred on the side of computation, as there were 10,000 possibilities to test (from the digits 0000 up through 9999) and no clear intuition into which four-digit numbers most gradually descended into a cycle. Virtually every solver wrote code or used a digital spreadsheet to find the answer.

Solver Michael Branicky’s Python code correctly determined the answer was **6,239**, which resulted in a whopping **111 unique numbers**. The final four unique numbers in that sequence were 4,100, 8,100, 6,100 and 2,100, which together formed a four-number loop.
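The whole search fits in a few lines of Python. Here’s a sketch along the lines of what solvers ran (the function names are mine):

```python
def middle_square(n):
    # Square, left-pad with zeros to eight digits, take the middle four.
    return int(f"{n * n:08d}"[2:6])

def unique_count(start):
    # Count the distinct values visited before the sequence first repeats.
    seen, n = set(), start
    while n not in seen:
        seen.add(n)
        n = middle_square(n)
    return len(seen)

best = max(range(10000), key=unique_count)
print(best, unique_count(best))  # per the column: 6,239 with 111 unique numbers
```

You can also confirm the 3,792 fixed point and the 4,100 → 8,100 → 6,100 → 2,100 loop directly with `middle_square`.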

Solver Tom Conroy made a directed graph that showed the broader structure at play here:

Tom found that there were three four-number cycles and five fixed points (i.e., numbers whose middle-square resulted in themselves), the most surprising of which turned out to be 3,792, whose square is 14,379,264. Based on Tom’s graph, it appeared that the majority of four-digit numbers fell into the same eventual loop as 6,239 did.

Well, aren’t you lucky? There’s a whole book full of the best puzzles from this column and some never-before-seen head-scratchers. It’s called “The Riddler,” and it’s in stores now!

Email Zach Wissner-Gross at riddlercolumn@gmail.com.


Friday, June 30 will mark the final column for The Riddler here at FiveThirtyEight. But this isn’t *quite* the end. Next week, I will be running the Eighth and Final Battle for Riddler Nation, with results appearing in the ultimate column the following week. And after that … stay tuned!

And now, without further ado, back to the puzzles!

With The Riddler nearing its end here at FiveThirtyEight, I can finally get something off my chest: Starting a competition with the flip of a coin (say, to determine possession of a ball) is so boring!

Instead, let’s give the captain of one team a fair coin and the captain of the other team a fair die. The captain with the coin will flip it at the same time the other captain rolls the die. They continue doing this until the coin is the same (whether heads or tails) for *three* consecutive flips or the number that comes face-up on the die is the same for *two* consecutive rolls.

On average, how many coin flips will it take to get three in a row? And how many die rolls will it take to get two in a row?

*Extra credit:* While the numbers of flips and rolls may often be the same, which team — the team with the coin or the team with the die — is more likely to *win* the toss/roll? (That is, which is more likely to happen sooner?)

Thanks to Twitter, I recently became aware of the “middle-square method” for generating pseudorandom numbers (numbers that appear random, but are derived in a deterministic sequence).

This week, let’s look at the middle-square method for generating pseudorandom four-digit numbers. First, we start with a four-digit number, such as 9,876. If we square it, we get the eight-digit number 97,535,376. Our next pseudorandom number is taken from the middle four digits of that square: 5,353. And to get the *next* pseudorandom number, we can square 5,353, which gives us 28,654,609, the middle four digits of which are 6,546.

Of course, many four-digit numbers have a square that’s seven digits rather than eight. And in a sequence of middle-square numbers, you also might encounter smaller numbers whose squares have six or fewer digits. In these cases, you can append zeros to the beginning of the square until you have eight total digits, and once again take the middle four digits.

No matter what initial four-digit number you pick, your sequence of pseudorandom numbers will eventually repeat itself in a loop. If you want the longest sequence of such numbers before any repetition occurs, what starting number should you pick? And how many unique numbers are in the sequence?

Congratulations to Brian Mercurio of Binghamton, New York, winner of last week’s Riddler Express.

Last week, you were a mission commander for the Riddler Space Agency, which was engaged in a space race with a competing agency. Both agencies were trying to claim regions of a newly discovered, perfectly spherical moon that possessed a magnetic field. Everywhere on the surface of this moon the magnetic field lines pointed from the north pole to the south pole, *parallel* to the surface (i.e., the magnetic field did not point into or out of the moon’s volume).

While you knew your team would reach the moon first, the politicians in charge entered into a rather bizarre agreement with the competition. Wherever your team landed on the moon, all the points on the surface whose magnetic field lines pointed in the direction of your landing site — that is, where the magnetic field pointed more *toward* your landing site than *away* from it — would belong to Riddler Nation. All the remaining parts of the surface would go to the competing agency’s nation.

If your team landed on a random point on this moon’s surface, then what was the expected fraction of the moon’s surface area that would be claimed by Riddler Nation?

At first, this seemed to be a rather challenging puzzle. For some landing points, what happened was pretty clear. For example, if you landed on the magnetic north pole of the moon, Riddler Nation got nothing. But if you landed on the magnetic south pole, then the entire moon would be claimed by Riddler Nation. For other landing spots, however, the locus of points whose field lines were oriented more toward *A* (or whose magnetic field vectors had a positive component in *A*’s direction, you might say) was trickier to determine.

Fortunately, most solvers recognized there was a way to navigate around this three-dimensional mess. It was true that the region claimed by landing at a point *A* could be hard to calculate. However, *every* point on the sphere’s surface had a field line that either pointed more toward *A* or more toward the antipode of *A* — that is, the point on the diametrically opposite side of the moon. (Yes, there were points whose lines didn’t point toward either *A* or the antipode of *A*, but when calculating probabilities, this one-dimensional set of points was negligible compared to the entire two-dimensional surface.)

And so, between a point *A* and its antipode, the entire surface of the moon was claimed. That meant the average area claimed by Riddler Nation between *A* and its antipode was *half* the surface (i.e., one whole surface divided by two points). And since the entire surface could be split up into pairs of antipodal points, that meant on average you claimed **half** the surface.

Congratulations to Reid Price of Palo Alto, California, winner of last week’s Riddler Classic.

Last week, you were stopped by a troll while trying to cross a bridge. The troll was willing to grant you passage to the other side, provided you could estimate the factorial of a number *N*. (The troll kindly reminded you that the factorial — written with an exclamation point — of a whole number is the product of all the whole numbers from 1 to that number. For example, 5! is the product of the whole numbers from 1 to 5, so it’s 120.)

That was no problem, you thought, as you whipped your calculator out of your pocket. In addition to the 10 digits and a decimal point, your calculator could add, subtract, multiply, divide and exponentiate. And it even had a factorial button. Or rather, it used to …

It appeared that the devious troll somehow magically removed the factorial button from your calculator, replacing it with a button labeled *N*, which loaded the value of *N* from the calculator’s memory whenever you pressed it. While you didn’t yet know the precise value of *N*, the troll informed you it was no more than 200.

To pass the bridge, you had to use your calculator to estimate *N*! to within two orders of magnitude — that is, your answer had to be within a factor of 100 of the exact value of *N*!.

What expression would you have typed into your calculator?

A few readers opted to multiply *N* by *N*−1, and then that product by *N*−2, and then that product by *N*−3, and so on. While it wasn’t entirely clear from the puzzle if you could actually read out the value of *N* from the calculator — and therefore stop multiplying when you reached *N*−(*N*−1) — multiplying all the numbers from 1 to *N* could easily have resulted in a mistake. Even if you didn’t make any mistakes, the troll might very well have grown impatient and simply tossed you off the bridge.

Rather than find the exact value of *N*!, this puzzle was really asking you to find a decent approximation — that is, to within a factor of 10^{2} — that worked for all values of *N* from 1 to 200. Most solvers recalled that when it comes to factorial estimations, Stirling’s approximation is the go-to formula. The approximation is typically written in logarithmic form: ln(*N*!) ≈ *N*ln(*N*) − *N*. Here, ln is the natural logarithm function.

Raising *e* to both sides of the Stirling approximation and rearranging using a few exponential identities gave an equivalent form of the approximation: *N*! ≈ (*N*/*e*)^{*N*}. Before moving further, I just want to point out how cool this approximation is: multiplying every number from 1 to *N* turns out to be roughly the same as multiplying the single value *N*/*e* by itself *N* times.

Anyway, since your calculator didn’t have a button for *e*, Reid (this week’s winner) used 2.7 instead. Indeed, for all *N* between 1 and 200, both (*N*/*e*)^{*N*} and (*N*/2.7)^{*N*} were within the required factor of 100 of *N*!.

Of course, Stirling’s approximation is just an *approximation*, and can be further refined. Instead of submitting (*N*/*e*)^{*N*} to the troll, you could have multiplied it by √(2𝜋*N*), which brought the estimate even closer to the true value of *N*!.
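A quick numerical check (in Python, comparing logarithms so nothing overflows) confirms that both versions of the estimate with 2.7 in place of *e* stay comfortably within the troll’s factor of 100 for every *N* up to 200:

```python
import math

# For each N from 1 to 200, compare log10 of the true factorial against
# log10 of the bare estimate (N/2.7)**N and of the refined estimate
# sqrt(2*pi*N) * (N/2.7)**N. "Within a factor of 100" means the
# base-10 logarithms differ by less than 2.
for n in range(1, 201):
    true_log = math.log10(math.factorial(n))
    bare_log = n * math.log10(n / 2.7)
    refined_log = bare_log + 0.5 * math.log10(2 * math.pi * n)
    assert abs(bare_log - true_log) < 2
    assert abs(refined_log - true_log) < 2
```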

Finally, a few folks safely crossed the bridge *without* using or modifying Stirling’s approximation at all. In particular, solver Austin Shapiro picked five points spaced along the graph of log(*N*!) between 1 and 200, and found the unique quartic polynomial that passed through those five points. Austin’s approximation for *N*! was then 10 raised to this quartic polynomial, or 10^(0.000000119·*N*^{4} − 0.0000606·*N*^{3} + 0.0127·*N*^{2} + 0.82·*N* − 1.48). It was a noisier approximation, as shown by the wavy purple points in the graph, and it required more keystrokes. But how could the troll not appreciate Austin’s style?

Well, aren’t you lucky? There’s a whole book full of the best puzzles from this column and some never-before-seen head-scratchers. It’s called “The Riddler,” and it’s in stores now!

Email Zach Wissner-Gross at riddlercolumn@gmail.com.


Friday, June 30 will mark the final column for The Riddler here at FiveThirtyEight. If my math is right, there have been 375 columns (this being the 376th) over the past eight years: four years under the stewardship of Oliver Roeder and another four under mine. Every moment has been an absolute pleasure, from reading (and attempting to solve) submitters’ puzzles and writing a few puzzles of my own to marveling at the creative solutions and collaborations throughout Riddler Nation.

But this isn’t *quite* the end. On Friday, June 23 I will be running the Eighth and Final Battle for Riddler Nation, with results appearing in the ultimate column the following week. And after that … stay tuned!

If you’d like more weekly puzzles in math, logic and probability (and occasionally geometry, physics and beyond), please consider taking a one-minute survey that will help me plan some of my next steps:

And now, without further ado, back to the puzzles!

From Tim Curwick comes a puzzle of pointers:

You’re a mission commander for the Riddler Space Agency, which is engaged in a space race with a competing agency. Both agencies are trying to claim regions of a newly discovered, perfectly spherical moon that possesses a magnetic field. Everywhere on the surface of this moon the magnetic field lines point from the north pole to the south pole, *parallel* to the surface (i.e., the magnetic field does not point into or out of the moon’s volume).

While your team will reach the moon first, the politicians in charge have entered into a rather bizarre agreement. Wherever your team lands on the moon, all the points on the surface whose magnetic field lines point in the direction of your landing site — that is, where the magnetic field points more *toward* your landing site than *away* from it — will belong to Riddler Nation. All the remaining parts of the surface will go to the competing agency’s nation.

If your team lands on a random point on this moon’s surface, then what is the expected fraction of the moon’s surface area that will be claimed by Riddler Nation?

While crossing a bridge one day, you find yourself stopped by a troll. The troll will grant you passage to the other side, provided you can estimate the factorial of a number *N*. (The troll kindly reminds you that the factorial — written with an exclamation point — of a whole number is the product of all the whole numbers from 1 to that number. For example, 5! is the product of the whole numbers from 1 to 5, so it’s 120.)

That’s no problem, you think, as you whip your calculator out of your pocket. In addition to the 10 digits and a decimal point, your calculator can add, subtract, multiply, divide and exponentiate. And it even has a factorial button. Or rather, it used to …

It appears that the devious troll somehow magically removed the factorial button from your calculator, replacing it with a button labeled *N*, which loads the value of *N* from the calculator’s memory whenever you press it. The troll has not revealed to you the precise value of *N*, even though your calculator knows what it is, but you *do* know that *N* is no more than 200.

To pass the bridge, you must use your calculator to estimate *N*! to within two orders of magnitude — that is, your answer must be within a factor of 100 of the exact value of *N*!.

What expression will you type into your calculator?

Congratulations to Aaron L. of Houston, winner of last week’s Riddler Express.

Last week, you were betting on a horse race at The Riddler Casino. The casino provided betting odds (in the American format) for each horse. For example, odds of -150 meant that for every $150 you bet, you won an additional $100. Meanwhile, odds of +150 meant that for every $100 you bet, you won an additional $150.

Now, to break even, a horse with -150 odds should win 60 percent of the time, while a horse with +150 odds should win 40 percent of the time. (Yes, both +100 and -100 correspond to a 50 percent chance of victory.) Of course, most casinos rig the odds such that betting on all the horses in a race would cause you to lose money.

But not The Riddler Casino! Here, a horse with -150 odds has *exactly* a 60 percent chance of winning, and a horse with +150 odds has *exactly* a 40 percent chance.

And so, last week, a five-horse race caught your eye. The odds for three of the horses were +100, +300 and +400. You couldn’t quite make out the odds for the last two horses, but you could see that they were both positive multiples of a hundred. What were the highest possible odds one of those last two horses could have had?

Since you knew The Riddler Casino offered fair odds, you could convert the odds of the first three horses directly into probabilities. The first horse had odds of +100, which meant for every $100 you bet, you won an additional $100. For the odds to be fair, this horse’s probability of winning had to be 1/2. The second horse had odds of +300, which meant its probability of winning was 1/4. The third horse had odds of +400, which meant its probability of winning was 1/5. In general, positive odds that are 100 times *x* corresponded to a probability of 1/(*x*+1).

You further knew that one of the five horses had to win, which meant their probabilities had to add to 1. The first three horses accounted for a collective probability of 1/2 + 1/4 + 1/5, or 19/20. That meant the last two horses had a combined 1-in-20 chance of winning. But what *were* their individual chances?

We already said that positive odds of 100 times *x* corresponded to a probability of 1/(*x*+1). So when *x* was a whole number, as you were told was the case for those last two horses, that meant the probability was a unit fraction (i.e., a fraction with a numerator of 1). This meant the last two horses had probabilities that could be written as 1/*a* and 1/*b*, where *a* and *b* were whole numbers and 1/*a* + 1/*b* = 1/20.

The puzzle was specifically asking for the *highest possible* odds for one of those last two horses, so you wanted to minimize one of those two probabilities, say, 1/*b*. You could minimize 1/*b* by maximizing 1/*a*, and the largest unit fraction less than 1/20 was 1/21. Setting *a* equal to 21 gave you 1/*b* = 1/20 − 1/21, which meant 1/*b* = 1/420, which was indeed the smallest possible unit fraction you could generate. (Equivalently, as recognized by solver Bowen Kerins, 1/420 was a value that was clearly associated with the *highest* odds.)

The last step was converting this probability back to betting odds. The last two horses had odds of +2,000 (for the horse with probability 1/21) and **+41,900** (for the horse with probability 1/420).
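For readers who prefer to check this by machine, here is a minimal Python sketch (not from the column) that converts the known odds to probabilities and searches every unit-fraction split of the leftover 1/20:

```python
from fractions import Fraction

def prob_from_positive_odds(odds):
    # Fair positive odds of +100x imply a win probability of 1/(x + 1).
    x = Fraction(odds, 100)
    return 1 / (x + 1)

# The three known horses: +100, +300 and +400.
known = sum(prob_from_positive_odds(o) for o in (100, 300, 400))
remaining = 1 - known              # combined probability of the last two horses
assert remaining == Fraction(1, 20)

# Search all splits 1/a + 1/b = 1/20 where both are unit fractions.
# Since 1/a > 1/40 (it must be more than half of 1/20), a runs from 21 to 40.
best = max(
    b
    for a in range(21, 41)
    for b in [1 / (remaining - Fraction(1, a))]
    if b.denominator == 1 and b >= a
)
print(best)                  # 420, i.e., probability 1/420
print(100 * (best - 1))      # 41900, i.e., odds of +41,900
```

The search confirms that a = 21 (odds of +2,000) leaves the smallest possible unit fraction, 1/420, for the other horse.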

Congratulations to Adam Richardson of Old Hickory, Tennessee, winner of last week’s Riddler Classic.

Last week, in a game show, there were three identical doors arranged in a row from left to right. The host of the show, “Monty,” chose one of the doors and placed a prize of $100 behind it. There was no prize behind the other two doors. You were not present when Monty chose the door and placed the money behind it, so you couldn’t say for certain which door the prize was behind.

You were then brought to the stage and required to select one of the three doors to open. If the prize money was behind it, then you won! But if you guessed incorrectly, all was not lost. You could pay $80 to pick a second door. However, before you made that second selection (but after you paid the $80), Monty would give you a hint, telling you whether the prize was behind a door that was to the left or to the right of your first choice. (Note that this hint was only helpful when you previously selected the middle door.) If the prize wasn’t behind that second door, you could pay another $80 to try a third time.

You could assume that both you and Monty played with optimal strategies — you to maximize your expected net earnings (prize winnings minus payments for hints), and Monty to minimize the same. How much net earnings could you have expected to make on average?

You might have thought that you should have opened the middle door first. If the prize was behind it, then you won $100 without paying a cent, which was great! But if the prize *wasn’t* behind the middle door, then you could pay $80 for a hint and another pick. Of course, that hint told you exactly where the prize was, since there was only one door to the left of the middle and one door to the right. After paying the $80, you were guaranteed to win $100 with your next selection. And so, with this strategy, you either won $100 outright or you made a profit of $20.

Now, had Monty been onto your strategy, he would definitely have placed the prize behind one of the two side doors rather than the middle, which meant you never actually won the $100 and instead made only $20. So rather than *always* pick the middle door first, it was worth exploring a mixed strategy, whereby you sometimes picked the middle door and other times picked a side door. Monty would likely do the same. And with a two-player mixed strategy contest, this puzzle comfortably fell within the realm of game theory.

Suppose you picked the middle door with probability *p* and each side door with probability (1−*p*)/2. Meanwhile, suppose Monty placed the prize behind the middle door with probability *q* and behind each side door with probability (1−*q*)/2. What were your expected winnings, in terms of *p* and *q*?

Your chances of picking the prize door outright — whether it was the middle door or a side door — were *pq* + (1−*p*)(1−*q*)/2, in which case you won $100. If you picked the middle door first but were incorrect, which occurred with probability *p*(1−*q*), the hint ensured you always guessed correctly on your next attempt, which meant your net winnings were $20. If you picked a side door but were incorrect, which occurred with probability (1−*p*)*q* + (1−*p*)(1−*q*)/2, you still had two doors that potentially hid the prize, and it turned out to not be worthwhile to play further, meaning you walked away with no net profit or loss.

Putting these results together, your expected winnings in dollars were 100*pq* + 100(1−*p*)(1−*q*)/2 + 20*p*(1−*q*), which simplified to 50 − 30*p* − 50*q* + 130*pq*. It turned out that this game had a unique Nash equilibrium, which was plotted by solver Rohan Lewis below:

To analytically solve for this equilibrium, you could analyze how that previous expression varied with *p* and *q*. For any value of *q* (i.e., no matter what Monty’s strategy was), a particular value of *p* always resulted in maximum expected winnings. To compute that value, you could take the partial derivative with respect to *q* to get 130*p* − 50. Setting this equal to zero gave you the optimal value of *p* (for you), which was 5/13. Similarly, taking the partial derivative with respect to *p* gave you the expression 130*q* − 30, and setting this to zero gave you the optimal value of *q* (for Monty), which was 3/13.

With the game theory done, here’s how things played out: Monty placed the prize behind the middle door with probability 3/13 and behind each of the two side doors with probability 5/13. Then, you picked the middle door with probability 5/13 and each of the two side doors with probability 4/13. All this made intuitive sense — you were more likely to pick the middle door than either of the sides, whereas Monty favored the side doors to the middle.

Plugging these values of *p* and *q* into the expression for expected winnings gave you a result of 500/13, or approximately **$38.46**. Indeed, that was more than the $20 you would have made had you always gone for the middle door and Monty caught onto your scheme.
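If you’d like to verify the equilibrium exactly, here’s a short Python check (a sketch, not the column’s own code). The hallmark of a mixed-strategy Nash equilibrium is indifference: once you play *p* = 5/13, Monty’s choice of *q* doesn’t matter, and vice versa.

```python
from fractions import Fraction

def expected_winnings(p, q):
    # Your expected dollars: 100*pq + 100*(1-p)(1-q)/2 + 20*p*(1-q),
    # which simplifies to 50 - 30p - 50q + 130pq.
    return 100*p*q + 100*(1 - p)*(1 - q)/2 + 20*p*(1 - q)

p, q = Fraction(5, 13), Fraction(3, 13)   # your and Monty's middle-door probabilities

# At equilibrium, each player's strategy makes the other indifferent.
assert all(expected_winnings(p, Fraction(n, 10)) == Fraction(500, 13) for n in range(11))
assert all(expected_winnings(Fraction(n, 10), q) == Fraction(500, 13) for n in range(11))
print(float(Fraction(500, 13)))           # ≈ 38.46
```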

For extra credit, you played a similar game with Monty, but this time you had to pay $80 up front before selecting your first door. If the prize money remained $100, this particular game wasn’t worth playing. How much should the prize money have been to make this new game worthwhile?

Working through a similar analysis, such a game only became worthwhile when the prize exceeded **$144**.

Well, aren’t you lucky? There’s a whole book full of the best puzzles from this column and some never-before-seen head-scratchers. It’s called “The Riddler,” and it’s in stores now!

Email Zach Wissner-Gross at riddlercolumn@gmail.com.


You are at The Riddler Casino, and you are betting on a horse race. The casino provides betting odds (in the American format) for each horse. For example, odds of -150 means that for every $150 you bet, you win an additional $100. Meanwhile, odds of +150 means that for every $100 you bet, you win an additional $150.

To break even, a horse with -150 odds should win 60 percent of the time, while a horse with +150 odds should win 40 percent of the time. (Yes, both +100 and -100 correspond to a 50 percent chance of victory.) Of course, most casinos rig the odds such that betting on all the horses in a race will cause you to lose money.

But not The Riddler Casino! Here, a horse with -150 odds has *exactly* a 60 percent chance of winning, and a horse with +150 odds has *exactly* a 40 percent chance.

Today, a five-horse race has caught your eye. The odds for three of the horses are +100, +300 and +400. You can’t quite make out the odds for the last two horses, but you can see that they’re both positive multiples of a hundred. What are the highest possible odds one of those last two horses can have?

From Chris Gerig comes a variant of the famed Monty Hall problem:

In a game show, there are three identical doors arranged in a row from left to right. The host of the show, “Monty,” chooses one of the doors and places a prize of $100 behind it. There is no prize behind the other two doors. You are not present when Monty chooses the door and places the money behind it, so you cannot say for certain which door the prize is behind.

You are then brought to the stage and must select one of the three doors to open. If the prize money is behind it, then you win! But if you guess incorrectly, all is not lost. You can pay $80 to pick a second door. Before you make that second selection, however, Monty will give you a hint, telling you whether the prize is behind a door that’s to the left or to the right of your first choice. (Note that this hint is only helpful when you previously selected the middle door.) If the prize isn’t behind that second door, you can pay another $80 to try a third time.

Assume that both you and Monty play with optimal strategies — you to maximize your expected net earnings (prize winnings minus payments for hints), and Monty to minimize the same. How much net earnings can you expect to make on average?

*Extra credit:* Suppose Monty has you pay $80 up front before selecting your first door, and each subsequent selection (if you choose to make it) continues to cost $80. If the prize money remains $100, this game isn’t worth playing. How much should the prize money be to make the game worthwhile?

Congratulations to Ryan Steel of West Chester, Pennsylvania, winner of last week’s Riddler Express.

Last week, as a citizen of Riddler Nation, you were visiting the United States. Upon landing at an American airport, you wanted to exchange your 100 Riddlerian rupees for some American currency. Fortunately, you noticed a currency exchange station where it was possible to make a profit.

At the time, the dollar was known to be more valuable than the rupee. Now this station said they would give you *D* dollars for each rupee, where *D* was a decimal less than 1 that went to the hundredths place. So *D* could have been 0.99, 0.50 or 0.37, but not values like 0.117 or 1/𝜋. And when exchanging dollars back into rupees, the station used an exchange rate of *R*, where *R* was equal to 1/*D* *rounded to the nearest hundredth*. (Yes, that last part was very important.)

For example, suppose *D* was 0.53. In this case, when you traded in 100 rupees, you received $53. When trading the $53 back, the station used an exchange rate of 1/0.53, or 1.88679…, which they rounded up to 1.89. And so returning the $53 got you 100.17 rupees — a net profit!

What value of *D* would have earned you the greatest profit for your 100 rupees? (Remember, *D* was a decimal that went to the hundredths place and was less than 1.)

First off, where in this process did the profit-making actually occur? Because *D* was a decimal rounded to the nearest hundredth, when you exchanged 100 rupees you always received a number of dollars of equal value. It was the exchange *back* from dollars to rupees where profiting was possible.

Many solvers derived an algebraic expression for how much money you made (or lost). You started with 100 rupees, and after the first exchange you had 100·*D* dollars. The exchange rate back was 1/*D* rounded to the nearest hundredth, which could be expressed mathematically as round(100/*D*)/100, where “round” is a function that rounds to the nearest whole number. In the end, you had 100·*D*·round(100/*D*)/100 rupees, which simplified to *D*·round(100/*D*).

From there, most solvers either wrote code to find the maximum of this function or, like Lise Andreasen, used a spreadsheet to test the values of *D* from 0.01 to 0.99. Solver Bryce Manifold plotted the percent difference between this function and your initial 100 rupees, as shown below. There were some intriguing behaviors in this graph, including some apparent oscillations and a conic shape that widened for larger values of *D*. The maximum occurred when *D* was **0.93**. In this case, your initial 100 rupees netted you 0.93·round(100/0.93), or 100.44 rupees — a whopping 0.44 percent increase.
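In that spirit, here’s one way the brute force might look in Python (a sketch; it assumes ties round half-up, which doesn’t affect the maximum). Working in hundredths keeps everything in exact integer arithmetic:

```python
from fractions import Fraction

def rupees_back(d):
    # d is D in hundredths (e.g., d = 93 means D = 0.93). You exchange 100
    # rupees for 100*D = d dollars, then exchange back at R = 1/D rounded
    # to the nearest hundredth. Round-half-up of 10000/d, done exactly:
    r_hundredths = (20000 + d) // (2 * d)
    return Fraction(d * r_hundredths, 100)

best = max(range(1, 100), key=rupees_back)
print(best, float(rupees_back(best)))   # 93 100.44
```

Running this over all 99 candidate values of *D* confirms that 0.93 is the unique maximizer, returning 100.44 rupees.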

While 0.44 percent may not sound too impressive, that little bit of leverage was all you would have needed to get your arbitrage scheme off the ground. If you had your computer make 158 such trades in rapid succession, you’d double your money. After 10,000 such trades, you’d have more money than the annual GDP of the entire world (although someone would likely have caught on to your scheme by that point).

Congratulations to Bill Neagle of Springfield, Missouri, winner of last week’s Riddler Classic.

Last week, there was a parking lot behind your office building with 10 spaces that were available on a first-come, first-served basis. Those 10 spaces invariably filled up by 8 a.m., and the parking lot quickly emptied out at 5 p.m. sharp.

Every day, three of the 10 “early birds” who snagged spots before 8 a.m. left at random times between 10 a.m. and 3 p.m. and did not return that day. Knowing that some early birds left during that five-hour window, nine “stragglers” drove by the lot at random times between 10 a.m. and 3 p.m. If there was an available spot, a straggler immediately parked in the spot and didn’t leave until 5 p.m. If there was no open spot, a straggler immediately drove away from the lot and parked somewhere else, and didn’t return that day.

Suppose you were a straggler arriving at a random time between 10 a.m. and 3 p.m. What was the probability that you got a spot in the lot?

This puzzle was an exercise in combinatorics. In total, there were 12 people to keep track of, all of whom took their action — three leaving and nine arriving — at a random, independent time within the same interval. One way this scenario could have played out was with all the early birds leaving, followed by all the stragglers (S) arriving, the last of which was you (Y). We can document this as EEESSSSSSSSY — three early birds (E), followed by eight stragglers (S) and you (Y). In this case, you would *not* have gotten a spot in the lot. But for other orderings, like EESYESSSSSSS, you *would* have gotten a spot.

The total number of ways this parking scenario could have played out was equal to the number of ways to arrange three Es, eight Ss and one Y, which was 12!/(3!·8!·1!), or 1,980.

Among these equally likely cases, solvers like Bradon Zhang and Marissa Weichman divided the ones where you got a spot into four categories, based on the occurrence of the following four subsequences of letters:

- EY
- EESY
- EEESSY
- EESESY

After some careful consideration, you should be able to convince yourself that these four categories are mutually exclusive (i.e., an ordering can’t contain more than one of these sequences) and that the orderings with one of these subsequences are precisely the orderings in which you’d be able to park in the lot.

From there, you had to count up how many of the 1,980 sequences included each of these subsequences. For example, to find the number that included EY, out of the 12 total spots for letters, there were 11 in which the EY subsequence could have started. The remaining 10 letters included two Es and eight Ss, of which there were 10!/(2!·8!), or 45, ways to order them. And so there were 11·45, or 495, such orderings. Meanwhile, there were 72 orderings containing EESY, seven orderings containing EEESSY, and similarly another seven containing EESESY.

In total, there were 581 orderings that contained one of the four subsequences, which meant your probability of snagging a spot in the lot was **581/1,980**, or about 29.3 percent.
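Rather than categorizing subsequences, you could also brute-force all 1,980 orderings directly. The sketch below (mine, not a solver’s) places the three Es and your Y in every possible position, then scans each ordering, tracking free spots — up one for each early bird who leaves, down one for each straggler who finds a spot:

```python
from itertools import combinations
from fractions import Fraction

def you_park(order):
    # Scan the sequence; free spots go up on E and down on S (if any are free).
    free = 0
    for c in order:
        if c == "E":
            free += 1
        elif c == "S" and free > 0:
            free -= 1
        elif c == "Y":
            return free > 0
    return False

total = wins = 0
slots = range(12)
for e_pos in combinations(slots, 3):       # positions of the 3 early birds
    rest = [i for i in slots if i not in e_pos]
    for y in rest:                         # your position among the other 9
        order = ["S"] * 12
        for i in e_pos:
            order[i] = "E"
        order[y] = "Y"
        total += 1
        wins += you_park(order)
print(Fraction(wins, total))               # 581/1980
```

The enumeration agrees with the subsequence count: 581 favorable orderings out of 1,980.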

Email Zach Wissner-Gross at riddlercolumn@gmail.com.


As a citizen of Riddler Nation, you are visiting the United States. Upon landing at an American airport, you would like to exchange your 100 Riddlerian rupees for some American currency. Fortunately, you notice a currency exchange station where it might just be possible to make a profit.

The dollar is known to be more valuable than the rupee. Now this station says they will give you *D* dollars for each rupee, where *D* is a decimal less than 1 that goes to the hundredths place. So *D* can be 0.99, 0.50 or 0.37, but not values like 0.117 or 1/𝜋. And when exchanging dollars back into rupees, the station uses an exchange rate of *R*, where *R* is equal to 1/*D* *rounded to the nearest hundredth*. (Yes, that last part is very important.)

For example, suppose *D* is 0.53. In this case, when you trade in 100 rupees, you’ll receive $53. When trading the $53 back, the station uses an exchange rate of 1/0.53, or 1.88679…, which they round up to 1.89. And so returning the $53 gets you 100.17 rupees — a net profit!

What value of *D* will earn you the greatest profit for your 100 rupees? (Remember, *D* is a decimal that goes to the hundredths place and is less than 1.)

From Dave Moran comes a practical parking puzzle:

There’s a parking lot behind Dave’s office building with 10 spaces that are available on a first-come, first-served basis. Those 10 spaces invariably fill by 8 a.m., and the parking lot quickly empties out at 5 p.m. sharp.

Every day, three of the 10 “early birds” who snagged spots before 8 a.m. leave at random times between 10 a.m. and 3 p.m. and do not return that day. Knowing that some early birds leave during that five-hour window, nine “stragglers” drive by the lot at random times between 10 a.m. and 3 p.m. If there’s an available spot, a straggler immediately parks in the spot and doesn’t leave until 5 p.m. If there’s no open spot, a straggler immediately drives away from the lot and parks somewhere else, and doesn’t return that day.

Suppose you are a straggler arriving at a random time between 10 a.m. and 3 p.m. What is the probability that you will get a spot in the lot?

Congratulations to Billy Mullaney of Minneapolis, winner of last week’s Riddler Express.

In advance of the 2013-14 season, the NBA changed the format of the NBA Finals, a best-of-seven series. Previously, the Finals used a “2-3-2” format: Games 1, 2, 6 and 7 were played in the home arena of the higher-seeded team, while Games 3, 4 and 5 were in the arena of the lower-seeded team. With the change, the Finals moved to the “2-2-1-1-1” format: Games 1, 2, 5 and 7 were at the home of the higher-seeded team, while games 3, 4 and 6 were at the home of the lower-seeded team.

Last week, you were playing for the higher-seeded team heading into the Finals. While your team had a better record during the regular season, the two teams were evenly matched — at a neutral site, both teams were equally likely to win a game. But of course, no game in the Finals is played at a neutral site. Both teams had a 60 percent chance of winning each home game and a 40 percent chance of winning each away game.

Which format — 2-3-2 or 2-2-1-1-1 — gave your team a better chance of winning the Finals? (Or were they the same?)

A few solvers, like Emily Kelly, worked through the details, but that turned out to not be necessary with some careful thinking up front. We tend to break down best-of-seven series into different cases depending on how many games are played, which could be four, five, six or seven, each with different probabilities of occurring. But another way to imagine all this would be for the two teams to play *all seven games*, even if one of the teams has already won four of them. Playing those potentially extra games would have no effect on either team’s probability of winning the series, since the *first* team to win four of seven must be the *only* team to win four of seven.

So if you imagine playing all seven games no matter what, four of which are at home and three of which are on the road, then the ordering of which games are at home and which are on the road will have no effect on your chances of winning. Therefore, both formats (the 2-3-2 and the 2-2-1-1-1) gave you **the same chance** of winning.

For that matter, the series could have been 1-1-1-1-1-1-1 (alternating home and away with each game), 4-3 (the first four games at home) or 3-4 (the last four games at home). None of this mattered — all 7 choose 4 (or 35) arrangements gave you the same chance of winning the series.

For extra credit, you had to determine that chance of winning. This time around, working through the details like Emily did was required. Again, it may have been simpler to imagine you played all seven games rather than stopping as soon as one team won four of them. In the end, your probability of winning was 8,313/15,625, or **about 53.2 percent**.
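One way to work through those details is an exact dynamic program over the seven games, as in this Python sketch (not Emily’s actual method). It also confirms the main result, since both schedules give identical series probabilities:

```python
from fractions import Fraction

HOME, AWAY = Fraction(3, 5), Fraction(2, 5)   # 60 percent at home, 40 percent away

def series_win_prob(schedule):
    # Probability of winning at least 4 of 7 independent games, where
    # `schedule` lists your win probability for each game in order.
    dist = [Fraction(1)]                      # dist[w] = P(exactly w wins so far)
    for p in schedule:
        new = [Fraction(0)] * (len(dist) + 1)
        for w, pr in enumerate(dist):
            new[w] += pr * (1 - p)            # lose this game
            new[w + 1] += pr * p              # win this game
        dist = new
    return sum(dist[4:])

fmt_232   = [HOME, HOME, AWAY, AWAY, AWAY, HOME, HOME]
fmt_22111 = [HOME, HOME, AWAY, AWAY, HOME, AWAY, HOME]
assert series_win_prob(fmt_232) == series_win_prob(fmt_22111) == Fraction(8313, 15625)
print(float(fmt := series_win_prob(fmt_232)))   # ≈ 0.532
```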

Interestingly, when the home team had a 50+*x* percent chance of winning, then your chances of winning the series varied nonlinearly with *x*. For small values of *x*, the probability of a series win hovered closer to 50 percent. We already said you had a 53.2 percent chance of winning the series when the home team won 60 percent of the time. When the home team won 70 percent of the time, you won the series about 57 percent of the time. When the home team won 80 percent of the time, you won the series about 62.6 percent of the time. When the home team won 90 percent of the time, you won the series about 73.6 percent of the time. But when the home team won 100 percent of the time, home-court advantage meant you were guaranteed to win the series.

Congratulations to Matthew Pitcock of Chicago, winner of last week’s Riddler Classic.

Last week, you started with just the number 1 written on a slip of paper in a hat. You were going to draw from the hat 100 times, and each time you drew, you had a choice: If the number on the slip of paper you drew was *k*, then you could either receive *k* dollars or add *k* higher numbers to the hat.

For example, if the hat contained slips with the numbers 1 through 6 and you drew a 4, you could have either received $4 or received no money but added four more slips numbered 7, 8, 9 and 10 into the hat. In either case, the slip with the number 4 would have then been returned to the hat.

If you played this game perfectly — that is, to maximize the total amount of money you’d receive after all 100 rounds — how much money would you have expected to receive on average?

What made this puzzle interesting was the apparent tension between your choices: At each stage, did it make more sense to cash out what you could, or were you better off reinvesting and putting larger quantities into the hat, potentially boosting your earnings later on? Most solvers tackled this tension by working backward from when there were only a few rounds remaining.

Suppose you were drawing for the 100th (and last) time and that there were *N* numbers in the hat. Since this was your final chance to extract some cash from this game, you always took the money rather than put more numbers into the hat. You could expect to receive the average of the numbers from 1 to *N*, or (*N*+1)/2 dollars.

Now suppose you were drawing for the 99th time. Again, suppose there were *N* numbers in the hat (perhaps a different value from the *N* in the previous paragraph) and that you drew the number *k*. If you pocketed the money, you’d get *k* dollars this round and then an average of (*N*+1)/2 dollars in the final round, for a total average of *k*+(*N*+1)/2. But if you put *k* more numbers into the hat, you’d get zero dollars this round and an average of (*N*+*k*+1)/2 dollars in the final round. Since *k*+(*N*+1)/2 was always greater than (*N*+*k*+1)/2, you were better off taking the money here. In the end, you could expect to receive the average of *k*+(*N*+1)/2 for all values of *k* from 1 to *N*, which was *N*+1 dollars.

Next, suppose you were drawing for the 98th time. Once again, suppose there were *N* numbers in the hat and you drew the number *k*. If you took *k* dollars, your expected total after 100 drawings would have been *k*+*N*+1. If you instead added *k* more numbers to the hat, your expected total after 100 drawings would *again* have been *k*+*N*+1, the exact same result! So on the 98th drawing, it didn’t matter if you took the money or added more numbers to the hat. Either way, your expected final total was (3/2)·(*N*+1) dollars.

And now suppose you were drawing for the 97th time, with *N* numbers in the hat and a drawn number *k*. If you took the money, your expected total was *k*+(3/2)·(*N*+1). If you added *k* more numbers to the hat, your expected total was (3/2)·(*N*+*k*+1). This time around, you got more money on average by adding numbers to the hat, resulting in an expected total of (3/2)^{2}·(*N*+1) dollars.

Putting more numbers in the hat was also the better option when you drew for the 96th time, the 95th time, the 94th time and so on, with each prior drawing adding another factor of 3/2 to your expected total.

To summarize, you could maximize your expected earnings by *always* putting more numbers into the hat (regardless of what value *k* you drew) for the first 97 drawings. For the 98th, it didn’t matter if you took the money or added more numbers to the hat. And for the last two drawings, you took the money. In the end, your expected total winnings were (3/2)^{98}·(1+1), or **about 361.4 quadrillion (with a Q!) dollars**. For reference, this figure exceeded the annual GDP of the world by a few thousandfold.
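The backward induction above can also be checked mechanically. The memoized recursion below (a sketch under the same setup) computes the optimal expected total exactly for small numbers of rounds, and matches the closed form (3/2)^(*r*−2)·(*N*+1) for *r* ≥ 2 rounds remaining:

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def value(rounds_left, n):
    # Expected total under optimal play with `rounds_left` draws remaining
    # and `n` slips (numbered 1 through n) in the hat.
    if rounds_left == 0:
        return Fraction(0)
    total = Fraction(0)
    for k in range(1, n + 1):
        take = k + value(rounds_left - 1, n)     # pocket k dollars
        grow = value(rounds_left - 1, n + k)     # add k more slips instead
        total += max(take, grow)
    return total / n

# Check the closed form (3/2)^(r-2) * (n+1) for a hat starting at n = 1.
for r in range(2, 7):
    assert value(r, 1) == Fraction(3, 2) ** (r - 2) * 2

print(float(Fraction(3, 2) ** 98 * 2))           # ≈ 3.614e17 dollars
```

Running the full 100 rounds this way is infeasible (the hat’s size explodes), which is exactly why the affine pattern in *N* is so useful: it extrapolates the small cases to (3/2)^98·2 dollars.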

Solver Laurent Lessard further explored the *distribution* your winnings could take when you played to maximize the average. The distribution had a log-normal appearance, which was why the arithmetic mean appeared to the right of the maximum. While you made 361.4 quadrillion dollars on average, you usually made less than that (and sometimes *a lot* more).

Email Zach Wissner-Gross at riddlercolumn@gmail.com.


In advance of the 2013-14 season, the NBA changed the format of the NBA Finals, a best-of-seven series. Previously, the Finals used a “2-3-2” format: Games 1, 2, 6 and 7 were played in the home arena of the higher-seeded team, while Games 3, 4 and 5 were in the arena of the lower-seeded team. With the change, the Finals moved to the “2-2-1-1-1” format: Games 1, 2, 5 and 7 were at the home of the higher-seeded team, while games 3, 4 and 6 were at the home of the lower-seeded team.

Suppose you play for the higher-seeded team heading into the Finals. While your team had a better record during the regular season, the two teams are evenly matched — at a neutral site, both teams are equally likely to win a game. But of course, no game in the Finals is played at a neutral site. Both teams have a 60 percent chance of winning each home game and a 40 percent chance of winning each away game.

Which format — 2-3-2 or 2-2-1-1-1 — gives your team a better chance of winning the Finals? (Or are they the same?)

*Extra credit: *What are your team’s chances of winning the Finals under each format?

From Yonah Borns-Weil comes an opportunity to (probably) win a lot of money:

You start with just the number 1 written on a slip of paper in a hat. Initially, there are no other slips of paper in the hat. You will draw from the hat 100 times, and each time you draw, you have a choice: If the number on the slip of paper you draw is *k*, then you can either receive *k* dollars or add *k* higher numbers to the hat.

For example, if the hat were to contain slips with the numbers 1 through 6 and you drew a 4, you could either receive $4 or receive no money but add four more slips numbered 7, 8, 9 and 10 into the hat. In either case, the slip with the number 4 would then be returned to the hat.

If you play this game perfectly — that is, to maximize the total amount of money you’ll receive after all 100 rounds — how much money would you expect to receive on average?

Congratulations to Izumihara Ryoma of Toyooka, Japan, winner of last week’s Riddler Express.

Last week’s puzzle was inspired by the game Digit Party, in which you place 25 digits one at a time on a five-by-five board. Whenever two of the same digits are placed in adjacent squares (whether horizontally, vertically or diagonally adjacent), you get a number of points equal to the value of those two digits.

For example, if you place four 1s in a row, you get 3 points, as there are three adjacencies. If you instead place those 1s in a two-by-two square, you get 6 points, as there are now six adjacencies (including the two diagonals).

For last week’s puzzle, you had to place 16 1s on an infinite grid that was initially empty. What was the maximum number of points you could earn?

To maximize the number of connections between the 1s, you wanted an overall compact geometry. Many solvers started with a four-by-four square, which was certainly worth quite a few points:

Just *how many* points was this arrangement worth? There were 12 horizontal connections, 12 vertical connections and nine pairs of diagonal connections. In total, the four-by-four square was worth 42 points.

But it was possible to do even better! Instead of the square formation, consider the arrangement below:

This time around, there were 12 horizontal connections, 11 vertical connections, eight pairs of diagonal connections and four additional diagonal connections around the perimeter. In total, this arrangement was worth **43 points**, one more than the square, and was therefore the solution to this week’s puzzle.

For extra credit, the number of 1s you had to arrange increased from 16 to 100 and 1,000. The puzzle’s submitter (and one of the creators of Digit Party), Vince Vatter, shared a recent article that explored this problem in greater detail. It turns out that the maximum number of connections given *N *1s tracks very closely to *N*log(*N*) for a while, which you can see for yourself via the corresponding OEIS sequence. (Of course, this pattern can’t last forever, since the additional number of connections you get by adding another 1 is surely bounded.)

The article goes on to conjecture that the optimal arrangement can be produced via an “octagon spiral,” as illustrated below.

The article further states that the number of connections in an octagon spiral with *N* 1s is 4*N*−⌈√(28*N*−12)⌉. When *N* was 100, this expression was **347** and when *N* was 1,000, this expression was **3,832**.

Congratulations to Alexander Bolton of London, United Kingdom, winner of last week’s Riddler Classic.

Last week, aliens were visiting Earth to announce their intent to blow up the planet. (Lovely.)

However, they presented you with a challenge. If you successfully completed the challenge, they’d blow up another planet instead. (Probably Neptune, because why not.)

The aliens had telepathically assigned each of the 8 billion human beings on Earth a unique random number, uniformly distributed between 0 and 1. Each human being knew their own number, but no one else’s. Your challenge was to identify the person with the highest number.

The aliens allowed you to ask a single yes-or-no question to all 8 billion people. This question had to be the same for everyone and would be answered simultaneously by everyone. The aliens had courteously agreed to aggregate the data for you as to who answered your question yes or no.

What question would you have asked, and what were your chances of saving the world?

As with a previous Riddler Classic, your strategy was to ask all 8 billion people whether their number exceeded a certain threshold value, *t*. Then, among the folks who answered in the affirmative, you’d pick a random person and hope that they were the one with the highest number.

From there, all you had to do was figure out which value of *t* maximized your chances of saving the world. And to do that, you had to find an expression for this probability in terms of *t*.

The probability that a randomly selected person’s number was less than or equal to^{2} *t* was, well, *t*. And the probability their number was greater than *t* was 1−*t*. So the probability that *everyone’s* number was less than *t* was *t*^{8,000,000,000}. The probability that *exactly one* person had a number greater than *t* (an ideal scenario that guaranteed you’d save the world!) was (8,000,000,000)·(1−*t*)·*t*^{7,999,999,999}. In this expression, the coefficient of 8 billion was due to the fact that any of the 8 billion people in the world could have been the one with the number greater than *t*.

The probability that *exactly two *people had a number greater than *t* was 8 billion choose 2, or (8,000,000,000)·(7,999,999,999)/2, times (1−*t*)^{2}·*t*^{7,999,999,998}. But that wasn’t your probability of saving the world in this scenario. Among the two people whose numbers exceeded *t*, you had to guess which one had the greater value, which you had a 50 percent chance of doing correctly. So the probability that exactly two people had a number greater than *t* *and* you saved the world was (8,000,000,000)·(7,999,999,999)/2·(1−*t*)^{2}·*t*^{7,999,999,998}/3. In general, the probability that exactly *N* (greater than zero) people had a number greater than *t* *and* you saved the world was 8 billion choose *N* times (1−*t*)* ^{N}*·

At this point, you had to add these expressions for all values of *N* between 1 and 8 billion, although for values of *t* close to 1 many of these expressions with larger values of *N* were negligible. Some solvers did this by summing most or all of these 8 billion terms using a computer, while others, like Josh Silverman, used approximations that worked well for large populations like 8 billion. In the end, the probability of saving the world was maximized when *t* was approximately 1−1.88×10^{-10}. And so, the question you would have asked everyone was, **“Is your number greater than 1−1.88×10**^{-10}**?”**

The probability of saving the world was very sensitive to this value of *t*, as Josh demonstrated with the figure shown below. Interestingly, your chances of saving the world were greater than 50 percent! With the optimal value of *t*, this probability turned out to be about **51.7 percent** — just a little better than the flip of a coin. So — are you feeling lucky?

Email Zach Wissner-Gross at riddlercolumn@gmail.com.

]]>^{1} and you may get a shoutout in the next column. Please wait until Monday to publicly share your answers! If you need a hint or have a favorite puzzle collecting dust in your attic, find me on Twitter or send me an email.

From Vince Vatter comes a puzzle about a “party” game that’s all the rage these days:

In the game Digit Party (of which Vince was one of the creators!), you place 25 digits one at a time on a five-by-five board. Whenever two of the same digits are placed in adjacent squares (whether horizontally, vertically or diagonally adjacent), you get a number of points equal to the sum of those two digits.

For example, if you place four 1s in a row, you get 6 points — there are three adjacencies, each worth 1+1 points. If you instead place those 1s in a two-by-two square, you get 12 points — there are now six adjacencies (including the two diagonals), each worth 1+1 points.

The game is a lot of fun, and this week’s Express is inspired by it.

Suppose you have to place 16 1s on an infinite grid that is initially empty. What is the maximum number of points you can earn?

*Extra credit:* Suppose you have to place 100 1s on an infinite grid that is initially empty. What is the maximum number of points you can earn? What about 1,000 1s?

From Paul Pudaite comes an opportunity for you to save the world:

Later this year, aliens will visit Earth and announce that they intend to blow up the planet. (Lovely.)

However, they present you with a challenge. If you successfully complete the challenge, they’ll blow up another planet instead (probably Neptune, because why not).

The aliens have telepathically assigned each of the 8 billion human beings on Earth a unique random number, uniformly distributed between 0 and 1. Each human being knows their own number, but no one else’s. Your challenge is to identify the person with the highest number.

The aliens will allow you to ask a single yes-or-no question^{2} to all 8 billion people. This question must be the same for everyone and will be answered simultaneously by everyone. The aliens have courteously agreed to aggregate the data for you as to who answers your question yes or no.

What question would you ask, and what are your chances of saving the world?

Congratulations to Shantanu Gangal of Mumbai, India, winner of last week’s Riddler Express.

Last week, a grasshopper was jumping on a number line and started at its home at zero (i.e., the “origin”). Its *N*th jump had length 1/2* ^{N}*, so its first jump had length 1/2, its second jump had length 1/4, its third jump had length 1/8 and so on.

However, before the jumping began, it drank a little too much grasshopper juice and lost all sense of direction. For each jump, it hopped left or right along the number line with equal probability.

After infinitely many jumps, the grasshopper’s head was once again clear and it wanted to return home to the origin. On average, what was the expected distance it traveled to return home? (Note that no matter which side of the origin the grasshopper was on, “distance” was defined as being zero or positive, but couldn’t be negative.)

Since the grasshopper was equally likely to jump left or right at any given point, by symmetry its average position always remained zero. That is, averaging the various positive landing positions to the right of the origin and the negative positions to the left of the origin always gave you exactly zero. And so some readers thought zero was the answer.

However, this puzzle asked for the average *distance to the origin*, which was defined as positive whether the grasshopper was to the left or to the right. So while the average position was always zero, the average distance to the origin was positive as soon as the grasshopper took its first jump.

Suppose the grasshopper’s first jump was to the right, meaning it was now at 1/2. Since subsequent jumps were equally likely to be left or right, its average position from this point forward remained at 1/2. What’s more, there was no way for the grasshopper to ever cross over to the left side of the origin again. Even if all the remaining jumps were to the left, the grasshopper would have been at 1/4, then 1/8, then 1/16 and so on, returning to the origin after infinitely many jumps. Because you never had to worry about crossing the origin again, the average distance the grasshopper had to travel was simply 1/2.

Now suppose the grasshopper’s first jump was to the left, meaning it was now at -1/2. This time around, it could never return to positive territory, even if all the remaining jumps were to the right. And so, once again, the average distance the grasshopper had to travel was simply 1/2.

That first jump was equally likely to be left or right, so the overall average distance was the average of 1/2 and 1/2, which was, of course, **1/2**.

For extra credit, the grasshopper’s *N*th jump was no longer 1/2* ^{N}*, but rather the slightly larger 2

With this greater jumping distance, it was now possible for the grasshopper to cross the origin. For example, suppose the first jump was to the right, so that the grasshopper was at 2/3. If all remaining jumps were to the left, they totaled 4/9 + 8/27 + 16/81 + …, a geometric series that summed to 4/5. After infinitely many jumps, the grasshopper’s final position was 2/3 − 4/5, or -2/15 — i.e., in negative territory. Because the grasshopper could cross the origin (and even do it multiple times), calculating the average distance was much trickier.

Solver Josh Silverman even composed a two-verse limerick — in what I believe is a first for The Riddler — about the complexity of the extra credit:

A grasshopper’s steps scaled by *s*,

It hopped and it leapt with finesse.

With *s* under one-half,

Each point had one path,

A solvable riddle, no mess.

But when *s* expands,

Points have multiple strands.

Too long to fade,

Neat solving forbade,

A labyrinthine mess on our hands.

A few solvers, including Michael Coffey and Charlie Drinnan, created vast probability trees to compute the average, which turned out to be **approximately 0.7421844**.

By the way, if you enjoyed this riddle, you might enjoy the 2003 paper, “Random walk with shrinking steps,” which analyzes the probability distribution of the grasshopper’s final position. Things really get crazy when the ratio of successive jumps is the golden ratio, as shown below:

Congratulations to Ivor Traber of Waterloo, Ontario, winner of last week’s Riddler Classic.

Last week, you were playing a board game, but unfortunately you were fresh out of dice. However, you *did* have a coin that you could use to simulate a die. For example, if you flipped the coin three times, then HHH could represent a 1, HHT a 2, HTH a 3, HTT a 4, THH a 5 and THT a 6. However, in this schema, the flips TTH and TTT were not assigned to a die roll, so if they came up then you’d have to start over and flip the coin another three times.

Facing the possibility of an unbounded number of coin flips, you instead turned to the one magic coin in your possession. You could choose any value of *p* between 0 and 1 and this coin would land on heads with probability *p*. But once you chose that value for *p*, it couldn’t be changed.

Using this magic coin, you wanted to simulate a die with at most *k* flips, so that *all* sequences of *k* flips could be split into six groups of equal probability.

What was the smallest value of *k* (i.e., the number of flips) you could use? And what was the corresponding value of *p*?

You immediately knew that *k* couldn’t be 1 or 2, since that didn’t even generate enough distinct cases to represent six die rolls. So the first value to check for *k* was 3. Once again, there were eight possible sequences of flips, which could be split into four categories of equal probability regardless of the value of *p*:

- HHH
- HHT, HTH and THH
- HTT, THT and TTH
- TTT

However, trying various combinations, you found that there was no way to reorganize these eight sequences into six groups of equal probability, no matter what value of *p* you used. The same was true when *k* was 4. (Solver Dan Swenson recognized this immediately, having solved a similar Riddler Classic several years ago.)

But when *k*** was 5**, the impossible suddenly became possible. There were 32 possible sequences of flips, which could be split into six categories:

- HHHHH
- Four heads and one tails (five permutations)
- Three heads and two tails (10 permutations)
- Two heads and three tails (10 permutations)
- One heads and four tails (five permutations)
- TTTTT

You could form five distinct groups, each composed of one permutation from the second category (e.g., HHHHT), two permutations from the third category, two permutations from the third category and one permutation from the fourth category. All five of these groups had equal total probabilities, and in terms of *p* these were *p*^{4}(1−*p*) + 2*p*^{3}(1−*p*)^{2} + 2*p*^{2}(1−*p*)^{3} + *p*(1−*p*)^{4}. To form the sixth group, you had to combine HHHHH and TTTTT, which had a combined probability of *p*^{5} + (1−*p*)^{5}.

To calculate the value of *p*, you had to set the two previous expressions equal to each other and solve a quintic equation. Thanks to the symmetry of how these six groups of flip sequences were organized, there were two possible solutions: *p*** could either be approximately 0.30334 or approximately 0.69666**.

Email Zach Wissner-Gross at riddlercolumn@gmail.com.

]]>Welcome to The Riddler. Every week,^{1} I offer up problems related to the things we hold dear around here: math, logic and probability. Two puzzles are presented each week: the Riddler Express for those of you who want something bite-size and the Riddler Classic for those of you in the slow-puzzle movement. Submit a correct answer for either, win , I need to receive your correct answer before 11:59 p.m. Eastern time on Monday. Have a great weekend!</p>
</p>">^{2} and you may get a shoutout in the next column. Please wait until Monday to publicly share your answers! If you need a hint or have a favorite puzzle collecting dust in your attic, find me on Twitter or send me an email.

Inspired by a conversation with Matan Protter, this week’s Express involves a happily hopping grasshopper:

A grasshopper is jumping on a number line and starts at its home at zero (i.e., the “origin”). Its first jump will be length 1/2, its second jump will be length 1/4, its third jump will be length 1/8 and so on. (To make this explicit, its *N*th jump will be length 1/2* ^{N}*.)

However, before the jumping begins, it drinks a little too much grasshopper juice and loses all sense of direction. For each jump, it will hop left or right along the number line with equal probability.

After infinitely many jumps, the grasshopper’s head is once again clear and it wants to return home to the origin. On average, what is the expected distance it travels to return to the origin? (Note that no matter which side of the origin the grasshopper is on, “distance” is defined as being zero or positive, but cannot be negative.)

*Extra credit*: The next day, the grasshopper again jumps randomly from the origin, but this time with a little more enthusiasm, such that the *N*th jump has length 2* ^{N}*/3

From Daniel Sleator comes a puzzle for those who like to play board games but can’t stand to roll dice:

Daniel is playing a board game with his friends, but unfortunately he is fresh out of dice. He *does* have a coin, and his friends encourage him to use the coin to simulate a die. For example, if he flips the coin three times, then HHH could represent a 1, HHT a 2, HTH a 3, HTT a 4, THH a 5 and THT a 6. However, in this schema, the flips TTH and TTT are not assigned to a die roll, so if they come up then Daniel would have to start over and flip the coin another three times.

Facing the possibility of an unbounded number of coin flips, Daniel instead turns to the one magic coin in his possession. He can choose any value of *p* between 0 and 1 and the coin will land on heads with probability *p*. But once he has chosen a value for *p*, it cannot be changed.

Using this magic coin, Daniel wants to simulate a die with at most *k* flips, so that *all* sequences of *k* flips can be split into six groups of equal probability.

What is the smallest value of *k* (i.e., the number of flips) Daniel can use? And what is the corresponding value of *p*?

Congratulations to Sam Hamilton of Portland, Maine, winner of the last Riddler Express.

Last week, you had a toast rack with five slots, arranged in an array. Each slot had a slice of toasted bread, which you removed one at a time. However, you were quite superstitious, and you knew it was bad luck to remove adjacent pieces of toast one after the other. (What? You’ve never heard that before? It’s totally a thing!)

How many different ways could you have removed the slices of toast?

Before working with five slices of toast, several solvers got their feet wet by seeing what would have happened with fewer slices. These calculations were relatively straightforward:

- With one slice of toast, there was one way to remove it. Easy!
- With two slices of toast, there were no ways to remove them. Once you picked the first slice, the second one was guaranteed to be adjacent. Too bad!
- With three slices of toast, there were again no ways to remove them. If you started with the middle slice, the second was guaranteed to be adjacent. If you instead started at one of the ends, you next had to select the other end, followed by the middle (which was adjacent). Too bad!
- With four slices of toast, you finally had some options. You couldn’t remove the first slice, because next you’d either have to remove the third slice (which was adjacent to the second and fourth) or the fourth (which meant the last two you removed were the adjacent slices in the middle). One solution was to remove the second, then fourth, then first, then third, which could be represented with the sequence 2, 4, 1, 3. Reflecting this sequence across the middle of the toast rack gave you a second solution: 3, 1, 4, 2. These were the only two solutions for four pieces of toast.

While having four or fewer slices had a total of three solutions, once you hit five pieces of toast the number of solutions grew significantly. While some solvers worked things out by hand, others wrote some computer code to work out all the cases. In the end, there were 14 valid sequences for five slices of toast, which included seven sequences and their reflections:

- 1, 3, 5, 2, 4 and 5, 3, 1, 4, 2
- 1, 4, 2, 5, 3 and 5, 2, 4, 1, 3
- 2, 4, 1, 3, 5 and 4, 2, 5, 3, 1
- 2, 4, 1, 5, 3 and 4, 2, 5, 1, 3
- 2, 5, 3, 1, 4 and 4, 1, 3, 5, 2
- 3, 1, 4, 2, 5 and 3, 5, 2, 4, 1
- 3, 1, 5, 2, 4 and 3, 5, 1, 4, 2

Solver Marcus Dunn made the connection between this puzzle and what has been referred to as Hertzsprung’s problem, which asks: How many ways can you place *N* kings on an N-by-N chessboard, with one king in each row and one king in each column, such that no two kings are attacking each other? If you imagine placing the kings in the columns one column at a time, you can’t place consecutive kings in neighboring rows — otherwise they’d be able to attack each other. And this was perfectly analogous to not consecutively removing neighboring pieces of toast! For example, here’s how the sequence 1, 3, 5, 2, 4 could be represented with kings on a chessboard:

For extra credit, you had six slices of toast instead of five. Now how many different ways could you have removed the slices without ever removing adjacent pieces one after the other? While many solvers again solved with computer assistance, a few realized that Hertzsprung’s problem corresponded to sequence A002464 on OEIS. There were **90** distinct ways to remove six pieces of toast.

There *is* a formula for the number of ways to remove *N* pieces of toast as a function of *N* — but finding that formula is left as an exercise to the reader.

Congratulations to Jenny Mitchell of Nashville, Tennessee, winner of the last Riddler Classic.

Last week’s puzzle concerned The New York Times’s new math game, Digits. In each level of the game, you are presented with six numbers and the four basic operations (addition, subtraction, multiplication and division). For example, the six numbers in the game below are 2, 3, 5, 14, 25 and 15.

With each step of the game, you first pick a number, then an operation and then another number. So if you picked 15, then ×, then 5, the 15 and 5 would disappear and be replaced by 75, as shown below:

At this point, you could use the 75 similarly to how you use the other four numbers. The objective of the game is to use the numbers and operations to reach a specific target number. But let’s put that aside for now.

With this gameplay in mind, your challenge was to determine the greatest number of distinct values you could make using anywhere from one number by itself to all six numbers, or anywhere in between. For the purposes of this puzzle, you could take your pick of starting numbers. Also, negative numbers and fractions were allowed — they could be starting numbers or values generated along the way.

For puzzles like these — with presumably very large answers — I find it helpful to compute an upper bound. If the upper bound isn’t *too* large (i.e., if it’s somewhere in the millions), then a computational approach that tests every possible sequence of number and operation is a plausible strategy.

If you were using all six numbers, then you performed five operations. For the first, you chose among six numbers, then one of the four operations, and finally among the remaining five numbers. Then you repeated this process until you had only one number left. The total number of ways to do this was (6·4·5)·(5·4·4)·(4·4·3)·(3·4·2)·(2·4·1), a product that came to 88,473,600. If you didn’t use all six numbers, there weren’t as many cases to consider: 11,059,200 for five numbers, 460,800 for four numbers, 9,600 for three numbers, 120 for two numbers and six for one number. All told, there were 100,003,326 ways to combine the numbers, a figure within reach for today’s personal computers.

But many of these hundred million or so ways to combine the numbers were mathematically equivalent, and that was the Gordian knot to untangle with this puzzle. To see this, suppose you had three numbers: *a*, *b* and *c*. If you added them, the order by which you added them didn’t matter, since addition is commutative. But if you subtracted them, the order mattered and there were six potentially distinct results: (*a*−*b*)−*c*, (*b*−*a*)−*c*, (*c*−*a*)−*b*, *a*−(*b*−*c*), *b*−(*c*−*a*) and *c*−(*a*−*b*). (There were other ways to order the operations, but they’d always be equivalent to one of these six expressions.)

Solver Jenny Mitchell wrote Python code to navigate these cases and used various combinations of prime numbers for the six starting values. (With smaller or duplicative values, you could wind up with equivalent expressions that weren’t always equivalent. For example, while *a*·*b* isn’t equivalent to *a*+*b*, they happen to be equal when a and b are both 2.) In the end, Jenny correctly computed that the maximum number of distinct values you could generate in a game of Digits was **974,860**.

And wouldn’t you know it, Jenny also identified a closely related OEIS sequence for the “number of inequivalent expressions involving *N* operands.” Taking the elements of this sequence and multiplying by the corresponding number of ways to choose from among the six starting numbers gave an answer of 6·1 + 15·6 + 20·68 + 15·1,170 + 6·27,142 + 1·793,002, which again came to a grand total of 974,860.

If you want to hack your way through Digits, it’s worth knowing that you can generate almost a million distinct values. So maybe leveraging your number sense is a better way to go.

Email Zach Wissner-Gross at riddlercolumn@gmail.com.

]]>Tomorrow is Coronation Day in England — a special day when people around the world can take a breath and contemplate essential questions like:

- How many limited-edition pies can one celebration reasonably have?
- What does it mean for the future of polite society if Kate Middleton shows up in a floral headpiece instead of a tiara?
- Which is the smoother ride, the Gold State Coach or the Diamond Jubilee State Coach? (Hint: One of them has air conditioning.)
- Will the Stone of Destiny give its traditional groan when King Charles III sits on the throne? (I’m sorry, you have to click to find out.)

Which is to say, it’s a spectacle. A spectacle loaded with nostalgia and warm patriotic feelings for some — and for others, a reminder of the monarchy’s expensive and scandal-ridden past. As King Charles officially ascends to the throne, polling shows that while there are still plenty of people around the world who have a soft spot for the royal family’s pomp and circumstance, they tend to be older; younger people (and nonwhite people) are more skeptical about the British monarchy’s utility in the modern world.

King Charles’s most obvious problem is that he is far from the public’s favorite royal — at home or abroad. A poll of American adults conducted by YouGov from April 29 to May 2 found that while he’s not as unpopular as as some other family members — the highest unfavorability rating belongs to Prince Andrew, King Charles’s younger brother, who was stripped of his military honors and “royal patronages” after a woman accused him of raping her when she was a teenager — it’s safe to say that relatively few Americans have a soft spot for him. In fact, slightly more Americans have an unfavorable view of King Charles (40 percent) than a favorable view (39 percent), and only 31 percent of Americans think King Charles should have succeeded Queen Elizabeth II. (For context: 24 percent say his son, Prince William, should have succeeded Queen Elizabeth, 15 percent say no one should have succeeded Queen Elizabeth, and 30 percent said they don’t know.)

Luckily for King Charles, his approval ratings are less dismal on the other side of the Atlantic. A YouGov poll conducted in Britain in late April found that 62 percent of Britons have a positive view of Charles — much friendlier than their feelings toward his son, Prince Harry (25 percent), daughter-in-law Meghan, the Duchess of Sussex (24 percent) or Andrew (9 percent). But Charles is far from the most beloved member of the royal family, even at home. His other son and daughter-in-law, Prince William and the former Kate Middleton, have even higher approval ratings, as does Charles’s sister Princess Anne. And perhaps most damningly, his net approval rating — the share of Britons who approve minus the share who disapprove — is more than 20 percentage points lower than that of William, Kate and Anne.

Unlike elected politicians, King Charles can watch his approval ratings ebb and flow without worrying how to win the next election — his job is for life. But in some ways the stakes are even higher for him, since public perceptions of the monarch are usually tied to views of the royal family as a whole, and there are real questions about whether the British monarchy will survive the 21st century, at least in its current form. And underneath those favorability ratings is a lot of apathy about the institution of the monarchy as a whole — and skepticism about whether King Charles really understands the public he’s supposed to serve.

In other words, King Charles’s troubles extend beyond whether he’ll match the crowd sizes at his mother’s coronation in 1953, when an estimated three million people thronged to London to watch her crowned queen. A BBC/YouGov poll conducted in April found that while a majority of Britons (58 percent) think Britain should continue to have a monarchy and only 26 percent want an elected head of state instead, there is a huge generational gap. The vast majority of older Britons want the monarchy to continue, but the youngest group (18-to-24 year olds) is torn: Thirty-two percent think the country should continue to have a monarchy, while a plurality (38 percent) want an elected head of state, and 30 percent don’t know. White respondents were also much likelier than nonwhite respondents (62 percent vs. 38 percent, respectively) to say they want to retain the monarchy. And even so, Britons are unsure how their government should support the monarchy: A YouGov poll conducted in mid-April found that a slim majority (51 percent) of adults don’t think that the government should pay for the celebrations, which could cost taxpayers more than $100 million.

There’s some hope for the British royals in one sense — these polls, while not enthusiastic, also don’t signal a particularly strong anti-monarchist vibe. Whether King Charles is the best person to shepherd the monarchy into its next phase is another question, though. That BBC/YouGov poll found that British adults overall are more likely to say that King Charles is “out of touch with the experiences of the British public” (45 percent) than “in touch” (36 percent) — and once again, the age gap is substantial. A slim majority of Britons over the age of 65 think King Charles is in touch with their experiences, compared to only 16 percent of Britons between the ages of 18 and 24. A solid majority (59 percent) of that group sees King Charles as out of touch.

Perhaps once he’s officially on the throne, King Charles will finally be able to turn his PR problems around. For years, he’s been trying to fix his image issues — and those of the monarchy — by promising a less expensive royal family and emphasizing his commitment to environmental issues that disproportionately matter to younger voters. This American will refrain from early judgment on whether his approach — including the “sustainable” choice to rewear coronation robes that belonged to his mother and grandfather, presumably saving them from the landfill where gold-embroidered regalia usually ends up — will resonate with the youth of Britain going forward.

- Hollywood writers went on strike earlier this week after contract negotiations fell through, bringing production on many TV shows to a standstill. A YouGov poll conducted on Wednesday, the day after the strike was announced, found that Americans support the strike, overall: Fifty-eight percent of the adults surveyed said they support the writers’ strike, while 15 percent are opposed and 27 percent said they weren’t sure.
- An Ipsos poll asked Americans last week about potential abuses of artificial intelligence, and found that many are worried about the technology’s consequences in the not-too-distant future. According to the survey, 72 percent of Americans are worried that their data will be shared or that they won’t be able to reach a human when they want to, 70 percent are worried that more misinformation will spread online as the result of AI and 57 percent think the tools will discriminate or cause bias. A majority of Americans think the government should be involved in oversight of AI, but they’re split on whether the government should have a major role (38 percent) or a minor role (49 percent).
- Millennials aren’t doing well financially, according to a newly released March survey from Pymnts, a personal finance website, and LendingClub, an online lender. The survey found that more than 70 percent of millennials are living paycheck-to-paycheck, compared to 60 percent of adults overall. But it’s not all bad news: The survey also found that millennials have more money in their savings accounts, on average, than they did a year ago.

According to FiveThirtyEight’s presidential approval tracker,^{1} 42.7 percent of Americans approve of the job President Biden is doing, while 52.6 percent disapprove (a net approval rating of -9.7 points). At this time last week, 42.6 percent approved and 52.7 percent disapproved (a net approval rating of -10.1 points). One month ago, Biden had an approval rating of 42.8 percent and a disapproval rating of 52.7 percent, for a net approval rating of -9.9 points.

^{1} and you may get a shoutout in the next column. Please wait until Monday to publicly share your answers! If you need a hint or have a favorite puzzle collecting dust in your attic, find me on Twitter or send me an email.

From Jeremy Dixon comes a “toasty” treat of a puzzle:

You have a toast rack with five slots, arranged in an array. Each slot has a slice of toasted bread, which you are removing one at a time. However, you are quite superstitious, and you know it’s bad luck to remove adjacent pieces of toast one after the other. (What? You’ve never heard that before? It’s totally a thing!)

How many different ways can you remove the slices of toast?

*Extra credit:* Instead of five slots, suppose you have a rack with six slots and six slices of toast. Now how many different ways can you remove the slices without ever removing adjacent pieces one after the other?
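Both counts are small enough to check by brute force over all removal orders. A sketch, assuming the slots sit in a single row so that slots *i* and *i*+1 are the adjacent pairs (the answers aren't printed here, to avoid spoilers):

```python
from itertools import permutations

def count_safe_orders(n):
    """Count orders of removing n slices from a 1-by-n rack such that
    no two consecutively removed slices come from adjacent slots."""
    return sum(
        all(abs(order[i + 1] - order[i]) != 1 for i in range(n - 1))
        for order in permutations(range(n))
    )
```

This is the counting problem sometimes known as Hertzsprung's problem, so the brute-force totals can be cross-checked against that sequence.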

The New York Times is beta-testing a new math game called Digits. In each level of the game, you are presented with six numbers and the four basic operations (addition, subtraction, multiplication and division). For example, the six numbers in the game below are 2, 3, 5, 14, 25 and 15.

With each step of the game, you first pick a number, then an operation and then another number. So if you picked 15, then ×, then 5, the 15 and 5 would disappear and be replaced by 75, as shown below:

At this point, you could use the 75 similarly to how you use the other four numbers. The objective of the game is to use the numbers and operations to reach a specific target number. But let’s put that aside for now.

Instead, the question here is this: What is the greatest number of distinct values you can make using anywhere from one number by itself to all six numbers, or any amount in between? For the purposes of this puzzle, you can take your pick of starting numbers. Also, negative numbers and fractions are allowed — they can be starting numbers or values generated along the way.
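One way to enumerate the possibilities is an exhaustive search with exact rational arithmetic. Here's a sketch (the function name is mine); for readability it's demonstrated on a tiny two-number starting set, but the same search runs on any six numbers, just more slowly:

```python
from fractions import Fraction
from itertools import combinations

def reachable_values(numbers):
    """All distinct values obtainable from the starting numbers, combining
    any two remaining values with +, -, x or / at each step."""
    seen, visited = set(), set()

    def explore(state):
        state = tuple(sorted(state))
        if state in visited:
            return
        visited.add(state)
        seen.update(state)  # every surviving value counts, including the originals
        for i, j in combinations(range(len(state)), 2):
            a, b = state[i], state[j]
            rest = [state[k] for k in range(len(state)) if k != i and k != j]
            results = [a + b, a - b, b - a, a * b]
            if b: results.append(a / b)
            if a: results.append(b / a)
            for r in results:
                explore(rest + [r])

    explore([Fraction(n) for n in numbers])
    return seen
```

For example, starting from just 2 and 3 this finds eight distinct values: 2, 3, 5, 6, 1, −1, 2/3 and 3/2.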

Congratulations to Steve Schaefer of Carlsbad, California, winner of last week’s Riddler Express.

Last week, you considered the infinite points in the coordinate plane, and supposed that each point is one of three colors: red, green or blue. It turned out there had to be at least two points of the same color that were a distance 1 apart. But it was up to *you* to prove it, using just seven points in the plane.

If the points could only be one of *two* colors (e.g., red and blue) rather than three, you only needed three such points, arranged in an equilateral triangle with side length 1. All three vertices were a distance 1 from each other, and at least two of them had to be the same color, whether red or blue.

But with three colors, the puzzle was a good deal trickier. Several readers thought to arrange the seven points into a regular hexagon with side length 1 plus a point at its center. However, it was possible to color these seven points in such a way that any pair of points a distance 1 apart were different colors. One such coloring is shown below, with the alternating red and green vertices and a blue point at the center.

The solution was a graph known as the Moser spindle, illustrated below by solver Peter Exterkate.

This “spindle” consisted of two rhombuses with side length 1, themselves composed of two equilateral triangles each. These rhombuses shared a common vertex, but they were rotated about this vertex in such a way that their opposite vertices were a distance 1 apart.

Whatever color you assigned to the common vertex, the four adjacent points had to be assigned the other two colors. That meant the two vertices opposite from the common vertex had to be the same color as the common vertex. Since these two vertices were themselves a distance 1 apart, you could conclude that the spindle needed at least *four* colors to avoid having points a distance 1 apart with the same color.

A fascinating extension of this puzzle, known as the Hadwiger-Nelson problem, asks for the minimum number of colors needed to color every point in the plane so that no points a distance 1 apart have the same color. You might intuitively think this number should be very large, but it turns out to be less than 8. The Moser spindle rules out 3, while 4 was also ruled out by several graphs between 2018 and 2020, containing hundreds or thousands of points.

The so-called “chromatic number” of the coordinate plane is therefore 5, 6 or 7 — and no one knows (yet) which of these three numbers is the right answer!
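The spindle's four-color requirement is small enough to verify exhaustively. In the sketch below (the vertex numbering is my own), vertex 0 is the common vertex, vertices 1-3 and 4-6 form the two rhombuses, and the edge (3, 6) joins the two far vertices:

```python
from itertools import product

# Edges of the Moser spindle: two rhombuses (each a pair of unit
# equilateral triangles) sharing vertex 0, plus the far-vertex edge (3, 6).
EDGES = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3),
         (0, 4), (0, 5), (4, 5), (4, 6), (5, 6), (3, 6)]

def colorable(k):
    """Can the spindle be properly colored with k colors?"""
    return any(all(c[u] != c[v] for u, v in EDGES)
               for c in product(range(k), repeat=7))
```

Here `colorable(3)` comes back `False` while `colorable(4)` comes back `True`, confirming that the spindle rules out a 3-coloring of the plane.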

Congratulations to Eric Snyder of Everett, Washington, winner of last week’s Riddler Classic.

Last week, you explored a novel proof of the Pythagorean Theorem by high school students Ne’Kiya Jackson and Calcea Johnson from St. Mary’s Academy in New Orleans, Louisiana.

Their proof applied the law of sines (which itself can be derived from equivalent expressions for a triangle’s area and has *no dependency* on the Pythagorean theorem, thereby avoiding any circular logic) to the construction below:

Atop the figure were two reflected right triangles with legs *a* and *b* (with *a* < *b*) and hypotenuse *c*. Below these triangles was what the students called a “waffle cone” shape, formed between the extension of one top triangle’s hypotenuse and a line that was perpendicular to the other hypotenuse.

In their proof, they computed distances *p* and *q*, where *p* extended from the leftmost vertex of the two triangles to the intersection of the lines, and *q* extended from the topmost vertex of the two triangles to the same intersection.

Your challenge was to determine expressions for *p* and *q* in terms of *a*, *b* and *c*. However, in doing so, you couldn’t use the Pythagorean theorem in any of its forms (e.g., the so-called “distance formula,” etc.). After all, solving for *p* and *q* was a key step toward *proving* the Pythagorean theorem!

Jackson and Johnson subdivided the waffle into infinitely many similar triangles and then used geometric series to compute expressions for *p* and *q*. Solver Jim Jacobson took a similar approach, generating the following diagrams to solve for *p* and *q*:

Summing the segments along the line with length *p* gave you 2*ac*/*b*·[1 + (*a*/*b*)^{2} + (*a*/*b*)^{4} + (*a*/*b*)^{6} + …], an infinite geometric series that added up to ***p* = 2*abc*/(*b*^{2}−*a*^{2})**. Meanwhile, summing the segments along the line with length *q* gave you a similar geometric series that added up to ***q* = *c*(*a*^{2}+*b*^{2})/(*b*^{2}−*a*^{2})**.

Other solvers, like Amy Leblang and Rohan Lewis, instead split the waffle into two right triangles, one of which was similar to the original triangle:

In Rohan’s diagram, the unshaded triangle was similar to the overall construction. In particular, Rohan used the side lengths shown below:

Using the fact that these triangles were similar, that meant *c*/(2*a*^{2}/*c*) = *p*/(*p*−2*ab*/*c*) = *q*/(*q*−*c*). The equation between the first and second expressions allowed you to solve for *p*: ***p* = 2*abc*/(*c*^{2}−2*a*^{2})**. The equation between the first and third expressions allowed you to solve for *q*: ***q* = *c*^{3}/(*c*^{2}−2*a*^{2})**.

For extra credit, you had to use these expressions for *p* and *q* to complete a proof of the Pythagorean theorem. Jackson and Johnson did this using the law of sines. Another way to prove the theorem was to equate the two expressions for *p* above. They had the same numerator, and equating their denominators gave *b*^{2}−*a*^{2} = *c*^{2}−2*a*^{2}, which could be rearranged into the more familiar ***a*^{2} + *b*^{2} = *c*^{2}**.
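A quick numerical check of the expressions for *p* and *q*, using exact arithmetic on a 3-4-5 right triangle (the specific triangle is my choice of example):

```python
from fractions import Fraction

a, b, c = 3, 4, 5  # any right triangle with a < b works

# p from the geometric-series form and from the similar-triangle form
p_series  = Fraction(2 * a * b * c, b**2 - a**2)
p_similar = Fraction(2 * a * b * c, c**2 - 2 * a**2)
q_similar = Fraction(c**3, c**2 - 2 * a**2)

# The two denominators agree exactly when a^2 + b^2 = c^2.
assert p_series == p_similar
assert b**2 - a**2 == c**2 - 2 * a**2
```

For the 3-4-5 triangle, both forms give *p* = 120/7 and *q* = 125/7.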

There are so many ways to prove the Pythagorean theorem. Thanks to Jackson, Johnson and this week’s solvers, we now have a few more!

Email Zach Wissner-Gross at riddlercolumn@gmail.com.


This week’s Express is an oldie but a goodie:

Consider the infinite points in the coordinate plane, and suppose that each point is one of two colors: red or blue. It turns out there must be at least two points of the same color that are a distance 1 apart.

Why? Draw any equilateral triangle with side length 1. All three vertices are a distance 1 from each other, and at least two of them must be the same color, whether red or blue.

Now suppose every point in the plane is one of *three* colors: red, green or blue. Once again, it turns out there must be at least two points of the same color that are a distance 1 apart.

How can you show this is true using just seven points in the plane?

As you may have heard, two high school students from St. Mary’s Academy in New Orleans, Louisiana — Ne’Kiya Jackson and Calcea Johnson — recently discovered a novel proof of the Pythagorean Theorem.

Their proof applied the law of sines (which itself can be derived from equivalent expressions for a triangle’s area and has *no dependency* on the Pythagorean theorem, thereby avoiding any circular logic) to the construction below:

Atop the figure are two reflected right triangles with legs *a* and *b* (with *a* < *b*) and hypotenuse *c*. Below these triangles is what the students called a “waffle cone” shape, formed between the extension of one top triangle’s hypotenuse and a line that’s perpendicular to the other hypotenuse.

In their proof, they compute distances *p* and *q*, where *p* extends from the leftmost vertex of the two triangles to the intersection of the lines, and *q* extends from the topmost vertex of the two triangles to the same intersection.

Your challenge is to determine expressions for *p* and *q* in terms of *a*, *b* and *c*. However, in doing so, you *absolutely cannot* use the Pythagorean theorem in any of its forms (e.g., the so-called “distance formula,” etc.). After all, solving for *p* and *q* is a key step toward *proving* the Pythagorean theorem.

*Extra credit:* Once you’ve determined *p* and *q*, try completing a proof of the Pythagorean theorem that makes use of them. Remember, the students used the law of sines at one point.

Congratulations to Max Chai of Foster City, California, winner of last week’s Riddler Express.

Last week, you and your family decided to decorate 10 beautiful Easter eggs. You pulled a fresh carton of eggs out of your fridge and removed 10 eggs. There were two eggs remaining in the carton, which you returned to the fridge.

The next day, you opened the carton again to find that the positions of the eggs had somehow changed — or so you thought. Perhaps the Easter Bunny was snooping around your fridge?

The 12 slots in the carton were arranged in a six-by-two array that was symmetric upon a 180-degree rotation, and the eggs were indistinguishable from each other. How many distinct ways were there to place two eggs in this carton? (Note: Putting two eggs in the two leftmost slots was considered the same as putting them in the two rightmost slots, since you could switch between these arrangements with a 180-degree rotation of the carton.)

First off, how many ways were there to place two eggs in a carton with 12 slots? That was 12 choose 2, or 66. Since some of these 66 ways were equivalent to each other after a 180-degree rotation, that meant the answer had to be less than 66.

The majority of these 66 arrangements could be paired up so that they turned into each other upon a 180-degree rotation of the carton. However, arrangements that were already symmetric were not paired up, as they merely turned back into themselves upon being rotated. There were six such symmetric arrangements, as shown below:

The remaining 60 arrangements formed 30 pairs, which meant the number of distinct ways to place two eggs in the carton was 30 + 6, or **36**.

For extra credit, you had to determine the number of distinct arrangements for other numbers of indistinguishable eggs between zero and 12. The “Packsize Riddle Solving Team” from Salt Lake City, Utah, extended the approach for two eggs to *x* eggs. There were 12 choose *x* ways to place the eggs in the carton, many of which paired up.

Now if *x* was odd, there was no way for the arrangement to turn back into itself after a 180-degree rotation. One way to convince yourself of this was that one half of the carton (e.g., the left half) had to have an even number of eggs while the other half had an odd number. After the rotation, that first half now had an odd number while the second half had an even number. There was no way for these arrangements to be the same. So when *x* was odd, the number of distinct arrangements was simply **(12 choose ***x***)/2**.

But when *x* was even, you had to subtract the symmetric arrangements before dividing by two. There were 6 choose (*x*/2) different ways to place half the eggs on the left half of the carton, and each of those had a single symmetric way to place the remaining eggs on the right half. So when *x* was even, the number of distinct arrangements was ((12 choose *x*) − (6 choose (*x*/2)))/2 + (6 choose (*x*/2)), which simplified to **((12 choose ***x***) + (6 choose (***x***/2)))/2**.

The maximum number of distinct arrangements (a whopping 472) occurred when *x* was 6. And across all possible values of *x* from 0 to 12, there were a grand total of 2,080 arrangements.
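The case analysis above collapses into a single formula via Burnside's lemma: average the number of arrangements fixed by the identity with the number fixed by the 180-degree rotation. A short sketch (the function name is mine):

```python
from math import comb

def distinct_arrangements(x):
    """Distinct ways to place x indistinguishable eggs in the 12-slot carton,
    treating 180-degree rotations of the carton as identical (Burnside)."""
    # Only even egg counts admit rotationally symmetric arrangements:
    # choose half the eggs' slots in the left half, and the rotation fixes the rest.
    fixed_by_rotation = comb(6, x // 2) if x % 2 == 0 else 0
    return (comb(12, x) + fixed_by_rotation) // 2
```

This gives 36 for two eggs, peaks at 472 for six eggs and totals 2,080 across all egg counts from zero to 12.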

Congratulations to Izumihara Ryoma of Toyooka, Japan, winner of last week’s Riddler Classic.

Last week, you were the captain of a three-member crew (not including yourself): Geordi, Sidney and Alandra. Your ship had been captured by a previously unknown foe, who decided to return your ship if you could win a simple game.

Each of the three crew members was to be issued a number between zero and one, randomly and uniformly picked within that range. As the captain, your objective was to guess who had the highest number.

The catch was that you could only ask one yes-or-no question to each crew member. Based on the answer to the question you asked the first crew member, you could update the question you asked the second. Similarly, based on the answers to the first two questions, you could update the third question you asked. But in the end, you still had to guess which crew member had the highest number.

What was your optimal strategy, and what were your chances of regaining your ship?

Several readers interpreted the puzzle to mean that each crew member knew the numbers assigned to the *other* crew members. In this case, you could simply ask each crew member, “Do you have the largest number?,” thereby guaranteeing you’d know who had the largest number.

Yawn. The puzzles in this column are all about “math, logic and probability,” like it says at the top. So if you have some trivial interpretation, try reading the puzzle a different way or reach out to seek further clarification.

Now, this puzzle became *very* interesting when you assumed that each crew member knew *their own* number, but *not* those of their fellow crew mates. To see why, suppose for now that there were only two crew mates — say, Geordi and Sidney — instead of three.

In this case, you’d approach Geordi and ask him the only reasonable question you could: Is your number greater than *x*? I’m not saying (yet) what that number *x* is, but presumably, there was some value that optimized your overall chances of figuring out whether Geordi’s number or Sidney’s number was greater. If Geordi said yes, then you’d turn to Sidney and ask her if her number was greater than *y*. If she said yes, then you’d pick Sidney; otherwise, you’d pick Geordi. If Geordi said no, then you’d ask Sidney if her number was greater than *z*. Again, if she said yes, then you’d pick Sidney; otherwise, you’d pick Geordi.

But what were these values of *x*, *y* and *z*? If Geordi said yes, then his number was equally likely to be anywhere between *x* and 1, so the optimal value of *y* was halfway between these extremes, or (*x*+1)/2. And if Geordi said no, then his number was equally likely to be anywhere between 0 and *x*. Once again, the optimal value of *z* was halfway between these extremes, or *x*/2. With Geordi’s and Sidney’s numbers equally likely to be between zero and one, the diagram below highlights which coordinate pairs would lead you to guess *incorrectly*:

The area of this highlighted region was (*x*/2)^{2} + ((1−*x*)/2)^{2}, or (2*x*^{2}−2*x*+1)/4. This was minimized when its derivative was zero, i.e., when *x* = 1/2, at which point the area was 1/8. So when there were only two crew members, the cutoff value for Geordi’s question was 1/2 and the cutoffs for Sidney’s were 3/4 and 1/4. After all of this, your chances of correctly identifying who had the greater number were 7/8.
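A quick Monte Carlo sketch of the two-crew strategy (cutoff 1/2 for Geordi, then 3/4 or 1/4 for Sidney) lands right at the 7/8 success rate:

```python
import random

def two_crew_success(trials=200_000, seed=1):
    """Simulate the two-crew strategy: ask Geordi against 1/2, then
    ask Sidney against 3/4 (if yes) or 1/4 (if no); guess Sidney
    only if she answers yes."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        g, s = rng.random(), rng.random()
        cutoff = 0.75 if g > 0.5 else 0.25
        # Correct whenever "guess Sidney" matches "Sidney actually larger."
        wins += (s > cutoff) == (s > g)
    return wins / trials
```

With 200,000 trials, the estimate comes out very close to 0.875.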

Whew! All that would have made for quite a challenging Riddler Classic. However, last week’s puzzle included a *third* crew member, Alandra, about whose number you could additionally inquire. Instead of a square, you now had a cube to consider.

In the end, Izumihara, this week’s winner (and the only person to solve the puzzle by the submission deadline, I might add) was able to identify a series of cutoff values that maximized your chances of regaining your ship. Below, I list the approximate values and the questions they’d correspond to:

- Geordi, is your value greater than 0.624334?
  - If no: Sidney, is your value greater than 0.460442?
    - If no: Alandra, is your value greater than 0.347818?
      - If no: Guess Geordi
      - If yes: Guess Alandra
    - If yes: Alandra, is your value greater than 0.730221?
      - If no: Guess Sidney
      - If yes: Guess Alandra
  - If yes: Sidney, is your value greater than 0.824920?
    - If no: Alandra, is your value greater than 0.813443?
      - If no: Guess Geordi
      - If yes: Guess Alandra
    - If yes: Alandra, is your value greater than 0.918159?
      - If no: Guess Sidney
      - If yes: Guess Alandra

With this set of questions, your chances of correctly identifying the crew member with the greatest number were approximately **82.395 percent**.
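A Monte Carlo sketch of this strategy (with the nesting of the listed cutoffs reconstructed by me) lands at essentially the same success rate:

```python
import random

def guess_crew(g, s, a):
    """Decision tree using Izumihara's cutoff values."""
    if g <= 0.624334:
        if s <= 0.460442:
            return 'Alandra' if a > 0.347818 else 'Geordi'
        return 'Alandra' if a > 0.730221 else 'Sidney'
    if s <= 0.824920:
        return 'Alandra' if a > 0.813443 else 'Geordi'
    return 'Alandra' if a > 0.918159 else 'Sidney'

def three_crew_success(trials=200_000, seed=7):
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        g, s, a = rng.random(), rng.random(), rng.random()
        best = max((g, 'Geordi'), (s, 'Sidney'), (a, 'Alandra'))[1]
        wins += guess_crew(g, s, a) == best
    return wins / trials
```

With 200,000 trials, the estimate hovers around 0.824, matching the quoted 82.395 percent.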

If your ship is ever captured and you have to play this game to secure its release, with little time to devise a strategy, you’d better hope you don’t have more than two or three crew members. Could you imagine solving this puzzle with *four* crew members?

Email Zach Wissner-Gross at riddlercolumn@gmail.com.

Welcome to Pollapalooza, our weekly polling roundup.

What is art? According to the Russian writer Leo Tolstoy, the activity of art is “to evoke in oneself a feeling one has once experienced, and having evoked it in oneself, then, by means of movements, lines, colors, sounds, or forms expressed in words, so to transmit that feeling that others may experience the same feeling.” But, according to a new poll, many Americans believe art is just old paintings and sculptures they couldn’t make themselves.

Two weeks ago, YouGov showed 1,000 U.S. adult citizens pictures of famous art pieces and asked them questions about the works specifically and their views on art generally. It was a creative use of YouGov’s online polling methodology (after all, you can’t show pictures of art to respondents of a traditional phone poll) and revealed a lot about Americans’ artistic tastes.

According to the poll, 13 percent of Americans considered themselves very artistic, and another 36 percent considered themselves somewhat artistic. However, 48 percent considered themselves not very artistic or not artistic at all. And only 33 percent said that artistic painting came naturally to them (this number will become important in a minute).

Only 39 percent of Americans said that they were very or somewhat familiar with famous art movements or styles. In comparison, 58 percent said they were not very or not at all familiar with them. When asked about specific styles, people who were very, somewhat or not very familiar with artistic styles favored “classic art” the most.

Share of Americans who love, like, hate or dislike various artistic styles, among those who said they were very, somewhat or not very familiar with artistic styles

Style | Love or like | Dislike or hate | Diff. |
---|---|---|---|
Classic art | 87% | 7% | +80 |
Expressionism | 71 | 14 | +57 |
Street art | 71 | 18 | +53 |
Modern art | 70 | 19 | +51 |
Surreal art | 66 | 18 | +48 |
Digital art | 63 | 18 | +45 |
Abstract | 66 | 24 | +42 |
Pop art | 65 | 23 | +42 |

Eighty-seven percent of respondents said they either loved or liked “classic art,” versus just 7 percent who said they disliked or hated it. More recent artistic styles were still popular but had lower net favorability. For example, the turn-of-the-century style Expressionism had a +57 net favorability rating, Surrealism (which began around 1920) had a +48 rating and pop art (the 1950s and 1960s) clocked in at +42.

When YouGov asked respondents to react to seven specific paintings and sculptures, these preferences appeared again. At least 84 percent of American adults said they personally considered the four works created before 1900 to be art. But only 51 percent considered Piet Mondrian’s “Composition with Red, Blue and Yellow” (1930) to be art, and only 43 percent said the same of Sam Gilliam’s “Coffee Thyme I” (1980).

Share of Americans who consider seven famous paintings or sculptures to be art

Artwork | Year | Share who said it’s art |
---|---|---|
“David” by Michelangelo | 1504 | 84% |
“The Milkmaid” by Johannes Vermeer | 1660 | 90 |
“Self-Portrait with a Straw Hat” by Vincent Van Gogh | 1887 | 90 |
“The Basket of Apples” by Paul Cézanne | 1893 | 89 |
“Movements” by Marsden Hartley | 1913 | 76 |
“Composition with Red, Blue and Yellow” by Piet Mondrian | 1930 | 51 |
“Coffee Thyme I” by Sam Gilliam | 1980 | 43 |

YouGov also asked Americans if they thought they could replicate each of these works if given the appropriate materials and time. And despite their answer to the earlier question about their artistic skill, 65 percent of Americans thought they could definitely or probably reproduce the Gilliam painting, and 77 percent thought they could reproduce the Mondrian.^{1} That suggests that Americans’ relative distaste for modern art is related to its apparent simplicity. According to at least some Americans, something can be art only if it’s technically difficult to make. (Never mind the inspiration required to come up with the idea for the image in the first place.)

This poll is relevant to politics too. Last month, a principal in Tallahassee, Florida, resigned after failing to notify parents before sixth-grade students at her school were shown images of Michelangelo’s “David” (depicted nude). The incident touched off a debate over parental rights and the nature of art. YouGov asked about this, too. Seventy-five percent of respondents said that the statue is not pornographic (siding with Marge Simpson over Helen Lovejoy), while 16 percent thought it was. But 67 percent also said parents should be notified before their children are shown artwork depicting nudity, while 21 percent said parents didn’t need to be informed. And when YouGov straight-up asked if students of different ages should be shown a full-length photo of “David,” Americans were split. Seventy percent of respondents said it would be appropriate in high school, while 20 percent said it would not. But only 48 percent said it would be appropriate in middle school, while 38 percent said it would not. And only 31 percent thought it would be appropriate in elementary school, compared to 55 percent who thought it would be inappropriate.

The definition of art is subjective (in fact, a plurality of Americans said so in the poll). But, if you’re someone who believes that the majority should rule, polls provide a means to an objective answer. And that, in turn, raises uncomfortable questions: Can something still be considered art even if 95 percent of people think it isn’t? Or should public opinion about art even matter? Where should we draw the line?

I don’t have answers to these questions. But if I have to stay up at night pondering them, you do too.

- Last Friday, a judge in Texas suspended the Food and Drug Administration’s approval of the most commonly used abortion drug (although a higher court has since reinstated it, with restrictions). According to a Pew Research Center poll conducted just before that decision, though, banning medication abortion would be unpopular. Fifty-three percent of adults said it should be legal in their state, and only 22 percent said it should be illegal.
- Israeli Prime Minister Benjamin Netanyahu’s controversial plan to reform Israel’s judiciary has turned most Israelis against him. The plan, which would have given Netanyahu’s government the power to appoint judges and parliament the power to overturn their decisions, triggered mass protests, a general strike and cries of democratic backsliding before Netanyahu put it on pause late last month. According to a poll conducted on April 4 by Morning Consult, 76 percent of Israeli adults said the nation was on the wrong track, versus just 24 percent who said it was headed in the right direction. And Netanyahu’s net approval rating is a dismal -35 percentage points (28 percent to 63 percent), down 18 points since March 15.
- French President Emmanuel Macron also faced large-scale protests after forcing through an unpopular plan last month: a proposal to raise the retirement age for most French workers from 62 to 64. And according to Morning Consult, French adults now say their country is on the wrong track versus headed in the right direction, 81 percent to 19 percent, as of April 11. And Macron’s net approval rating is now -50 points (23 percent to 73 percent), down 24 points since last Christmas.
- According to the Pew Research Center, 17 percent of American adults said they have used a cryptocurrency, with young men leading the way (41 percent of men between 18 and 29 said they had used a cryptocurrency). However, the vast majority (75 percent) of Americans who have heard of cryptocurrency say they are not confident in its reliability or safety. Eighteen percent are somewhat confident, and only 6 percent are very or extremely confident.
- According to a new YouGov poll, 38 percent of Americans considered themselves introverts, and 22 percent considered themselves extroverts. However, 31 percent said they were about an equal mix of extroverted and introverted.

According to FiveThirtyEight’s presidential approval tracker,^{2} 43.0 percent of Americans approve of the job President Joe Biden is doing, while 52.2 percent disapprove (a net approval rating of -9.2 points). At this time last week, 42.7 percent approved and 52.8 percent disapproved (a net approval rating of -10.1 points). One month ago, Biden had an approval rating of 44.0 percent and a disapproval rating of 51.1 percent, for a net approval rating of -7.1 points.


For Easter, you and your family decide to decorate 10 beautiful eggs. You pull a fresh carton of eggs out of your fridge and remove 10 eggs. There are two eggs remaining in the carton, which you return to the fridge.

The next day, you open the carton again to find that the positions of the eggs have somehow changed — or so you think. Perhaps the Easter Bunny was snooping around your fridge?

The 12 slots in the carton are arranged in a six-by-two array that is symmetric upon a 180-degree rotation, and the eggs are indistinguishable from each other. How many distinct ways are there to place two eggs in this carton? (Note: Putting two eggs in the two leftmost slots should be considered the same as putting them in the two rightmost slots, since you can switch between these arrangements with a 180-degree rotation of the carton.)

*Extra credit:* Instead of two eggs remaining, suppose you have other numbers of indistinguishable eggs between zero and 12. How many distinct ways are there to place these eggs in the carton?

From Nis Jørgensen comes a picaresque puzzle of a captain and crew:

You are the captain of a three-member crew (not including yourself): Geordi, Sidney and Alandra. Your ship has been captured by a previously unknown foe, who has decided to return your ship if you can win a simple game.

Each of the three crew members is to be issued a number between zero and one, randomly and uniformly picked within that range. As the captain, your objective is to guess who has the highest number.

The catch is that you can only ask one yes-or-no question to each crew member. Based on the answer to the question you ask the first crew member, you can update the question you’d ask the second. Similarly, based on the answers to the first two questions, you can update the third question you’d ask. But in the end, you still have to guess which crew member has the highest number.

What is your optimal strategy, and what are your chances of regaining your ship?

Congratulations to Sweet Tea Dorminy of Greenville, South Carolina, winner of last week’s Riddler Express.

Last week’s Express was submitted by high schooler Max Misterka, a winner of the 2023 Regeneron Science Talent Search. Max and I were playing a game in which we both picked a number in secret. Let’s call Max’s number *m* and my number *z*. After we both revealed our numbers, Max’s score was *m*^{*z*}, while my score was *z*^{*m*}.

When we played most recently, Max and I selected distinct whole numbers. Surprisingly, we tied — there was no winner! Which numbers did we pick?

Because Max and I had tied, the whole numbers *m* and *z* satisfied the equality *m*^{*z*} = *z*^{*m*}. Raising both sides to the power 1/(*mz*) turned this into *m*^{1/*m*} = *z*^{1/*z*}, meaning *m* and *z* gave the same value of the function *f*(*x*) = *x*^{1/*x*}.

This function increased for small values of *x*, reaching a maximum value when *x* was approximately 2.718 (i.e., *e*). Beyond this maximum, the function forever decreased, asymptotically approaching 1. Because the function was increasing and then decreasing, with no other change of direction in between, that meant either *m* or *z* had to be less than *e*, while the other number had to be greater than *e*. Let’s suppose that *m* was the smaller number.

At this point, there weren’t many options: *m* had to be either 1 or 2. If *m* had been 1, then you needed 1^{*z*} = *z*^{1}, which meant *z* was also 1, contradicting the fact that the numbers were distinct. So *m* was 2, and you needed 2^{*z*} = *z*^{2}, which was true when *z* was 4. In other words, Max and I had picked the numbers **2 and 4**.
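A brute-force search over small whole numbers confirms there is only one such pair (the search bound of 40 is my choice; larger bounds find nothing new):

```python
# Search for distinct whole numbers m < z with m^z = z^m.
pairs = [(m, z)
         for m in range(1, 40)
         for z in range(m + 1, 40)
         if m**z == z**m]
```

The list comes back as just `[(2, 4)]`, since 2^{4} = 4^{2} = 16.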

For extra credit, you had to analyze another round of the game in which Max and I both picked positive numbers that weren’t necessarily whole numbers. I told Max my number without knowing his, at which point he told me the game was once again a tie. “Ah,” I replied, “that meant we must have picked the same number!” Which number did we both pick?

Mathematically, this meant that *f*(*m*) = *f*(*z*) implied that *m* and *z* were equal. For any value of *m* between 1 and *e*, there was a corresponding *z* greater than *e* such that *f*(*m*) = *f*(*z*). So for *f*(*m*) = *f*(*z*) to imply *m* = *z*, given that they were both at least 1, both *m* and *z* had to be ***e***. Alternatively, as noted by solver Fernando Mendez, they both could have been any positive number less than or equal to 1, since *f* only takes values greater than 1 everywhere else.

Going up against a high schooler who’s at the top of his class in math and science, all I can say is that I’m glad to have tied (rather than lost) both times we played this game.

Congratulations to Jason Winerip of Phoenix, Arizona, winner of last week’s Riddler Classic.

Last week, you were introduced to the sudoku-like game of Star Battle. In the five-star variant of the game, you were trying to fill a 21-by-21 grid with stars according to certain rules:

- Every row had to contain exactly five stars.
- Every column had to contain exactly five stars.
- Every bold outlined region had to contain exactly five stars.
- No two stars could be horizontally, vertically or diagonally adjacent.

For example, here was a solved game board:

In this example, the stars seemed to be rather evenly distributed throughout the board, although there were some gaps. In particular, this board had 20 empty two-by-two squares, highlighted below:

Some of these two-by-two regions overlapped — even so, they still counted as distinct.

In a solved board of Star Battle, what were the minimum and maximum possible numbers of empty two-by-two squares?

At first glance, this looked like a rather complicated combinatorial puzzle, or perhaps something that required lots of simulation. But as it turned out, you could figure this out with some relatively straightforward algebra!

Solver N. Scott Cardell started by getting a lay of the land. Each of the 21 rows had five stars, which meant there were 105 stars in total. Meanwhile, there were 20^{2}, or 400, total two-by-two squares in the grid. Because stars couldn’t be adjacent, that meant any given two-by-two square had at most one star on it.

Now a star in one of the four corners occurred on exactly one of these two-by-two squares, while a star on one of the edges occurred on two such squares and a star in the interior of the grid occurred on four such squares. If there were *C* corner stars, *E* edge stars and *I* interior stars, the number of two-by-two squares with a star on them was *C* + 2*E* + 4*I*. Since there were 400 two-by-two squares in all, the number of squares *without* a star was 400 − (*C* + 2*E* + 4*I*).

Since the total number of stars was 105, that meant *C* + *E* + *I* = 105, or *I* = 105 − *E* − *C*. Moreover, because each edge (like any other row or column) had five stars, with corner stars being counted for two edges, you had *E* + 2*C* = 20, or *E* = 20 − 2*C*.

At this point, you could algebraically eliminate variables from the expression for the number of empty two-by-two squares, 400 − (*C* + 2*E* + 4*I*). Plugging in 105 − *E* − *C* for *I* gave you 3*C* + 2*E* − 20. Finally, plugging in 20 − 2*C* for *E* gave you 20 − *C*.
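The elimination steps above can be sanity-checked numerically. A small sketch, treating the corner count *C* as the only free variable, as in the solution:

```python
# For each possible corner-star count C, derive E and I from the constraints
# and confirm that the number of empty 2x2 squares always equals 20 - C.
for C in range(5):
    E = 20 - 2 * C          # edge stars: E + 2C = 20
    I = 105 - E - C         # interior stars: C + E + I = 105
    empty = 400 - (C + 2 * E + 4 * I)
    assert empty == 20 - C
print("empty 2x2 squares = 20 - C for every C from 0 to 4")
```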

After all that work, this was a surprisingly simple result. To find the number of empty two-by-two squares, all you had to do was count up the number of stars that were in the four corners and subtract that from 20. Sure enough, this was consistent with the solved game of Star Battle in the original puzzle: No stars were in a corner, and there were 20 empty two-by-two squares.

So what was the answer? The minimum number of empty two-by-two squares was **16**, when all four corners had stars. The maximum was **20**, when all four corners were devoid of stars. (In my opinion, this riddle turned out to be simpler than it seemed at first — as opposed to Star Battle itself, which is much trickier than it seems.)

Email Zach Wissner-Gross at riddlercolumn@gmail.com.


The winners of the 2023 Regeneron Science Talent Search were announced on March 14. (Full disclosure: I was a finalist in this very same competition exactly one eternity ago. You can find me among a gaggle of fellow New Yorkers, if you look closely.)

I am delighted that one of this year’s winners was able to share his favorite puzzle for this week’s column!

Hailing from Harrisonburg, Virginia, high schooler Max Misterka studied quantum calculus, also known as q-calculus, extending it to a version he calls “s-calculus.” This week, Max is putting the quantum aside and challenging you to a puzzle that may or may not be solvable with traditional calculus:

Max and I are playing a game in which we both pick a number in secret. Let’s call Max’s number *m* and my number *z*. After we both reveal our numbers, Max’s score is *m** ^{z}*, while my score is

When we played most recently, Max and I selected distinct whole numbers. Surprisingly, we tied — there was no winner! Which numbers did we pick?

*Extra credit:* Max and I play another round. This time, we both pick positive numbers that are not necessarily whole numbers. I tell Max my number without knowing his, at which point he tells me the game is once again a tie. “Ah,” I reply, “that means we must have picked the same number!” Which number did we both pick?

From Ethan Rubin comes a matter of squeezing squares among the stars:

Ethan has been playing Star Battle, a sudoku-like game. In the five-star variant of the game, you are trying to fill a 21-by-21 grid with stars according to certain rules:

- Every row must contain exactly five stars.
- Every column must contain exactly five stars.
- Every bold outlined region must contain exactly five stars.
- No two stars can be horizontally, vertically or diagonally adjacent.

For example, here is a solved game board:

After playing the game, Ethan noticed that the stars seemed to be rather evenly distributed throughout the board, although there were some gaps. Specifically, he wondered how many distinct two-by-two squares in the grid *didn’t* contain a star. Here’s the same game board in which all 20 empty 2-by-2 squares are highlighted:

As you can see, some of these 2-by-2 regions overlap — even so, they still count as distinct.

In a solved board of Star Battle, what are the minimum and maximum possible numbers of empty 2-by-2 squares?

Congratulations to Thomas Stone of San Francisco, California, winner of last week’s Riddler Express.

I recently competed on “Jeopardy!” Heading into the Final Jeopardy! round, challenger Karen Morris was leading the way with $11,400, returning champion Melissa Klapper had $8,700 and I had $7,200. The Final Jeopardy! category was revealed as being “American Novelists,” and it was now time for all three of us to wager anywhere from $0 to the total amount we had for this final clue.

Despite the dramatic swings in the match, my assessment was that all three of us were somewhat evenly matched in terms of knowledge. Having studied my opponents, I was also confident that Karen would wager enough money to cover the most aggressive wager from Melissa, and that Melissa would wager enough to cover my most aggressive wager.

With these assumptions, it was logical for me to keep my own wager small, since my only chance at winning was if both Karen and Melissa guessed incorrectly. Not particularly liking the category, I chose to wager $0. What was the maximum dollar amount I could have wagered without affecting my chances of winning? (Again, you could assume Karen wagered enough to cover Melissa and Melissa wagered enough to cover me.)

If Melissa had bet everything she had and answered correctly, she would have doubled up, finishing with $17,400. To come out on top, Karen needed to finish with at least $17,401, which meant she’d have to wager at least $6,001. Similarly, in the event I had bet everything and answered correctly, I would have finished with $14,400. To finish with at least $14,401, Melissa needed to wager at least $5,701.

As I said earlier, I was hoping that both Karen and Melissa would get Final Jeopardy! wrong. In that case, Karen would have *lost* at least $6,001, so that her final total was at most $5,399. Similarly, Melissa would have lost at least $5,701, so that her final total was at most $2,999.

For me to have a shot at winning under these assumptions, I had to finish with more than the greater dollar amount of $5,399 and $2,999 (i.e., $5,399). To guarantee I finished with at least $5,400 by the end of the show, even if I answered incorrectly, I should have wagered no more than $7,200 minus $5,400, or **$1,800**.
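The wagering arithmetic above fits in a tiny script (the variable names are mine):

```python
karen, melissa, me = 11_400, 8_700, 7_200

karen_wager = 2 * melissa + 1 - karen    # covers Melissa's all-in: $6,001
melissa_wager = 2 * me + 1 - melissa     # covers my all-in: $5,701

# If both miss Final Jeopardy!, they fall to at most these totals:
karen_after = karen - karen_wager        # $5,399
melissa_after = melissa - melissa_wager  # $2,999

# I must keep at least one dollar more than the higher of those totals
max_wager = me - (max(karen_after, melissa_after) + 1)
print(max_wager)  # -> 1800
```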

All the clues from my episode are available via J! Archive, which further provides wagering suggestions for Final Jeopardy! Sure enough, it recommends I not exceed $1,800. (It also recommends I wager at least $1,501 to cover a wager of $0 from Melissa, which would have been a good idea.)

In the end, Karen wagered $6,001, Melissa wagered $8,000, and I wagered $0 — all very reasonable bets, in my opinion. For extra credit, knowing these were the wagers we made, you had to further assume that all three of us had the same probability *p* of getting Final Jeopardy! correct, and that these three events were independent of one another. If the value of *p* was random and uniformly distributed between 0 and 1, what was my probability of winning the match?

Given these wagers, there were two ways I could have won: if all three of us whiffed on Final Jeopardy! (known as a “triple stumper”), which happened with probability (1−*p*)^{3}, or if I was the only one to get Final Jeopardy! correct, which happened with probability *p*·(1−*p*)^{2}. Adding these together gave you (1−*p*)^{2}, i.e., the probability that both Karen and Melissa were wrong, since my answer didn’t matter. Since *p* was equally likely to be any value between 0 and 1, solver Paige Kester recognized that my probability of winning was the integral of (1−*p*)^{2} with respect to *p* from *p* = 0 to *p* = 1. By symmetry, this was the same as the integral of *p*^{2} from 0 to 1, which was **1/3**. All things considered, I had a decent chance of pulling off the victory!
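A quick Monte Carlo check of the 1/3 result, drawing *p* uniformly and simulating the three responses (the simulation setup is my own):

```python
import random

random.seed(0)
trials = 200_000
wins = 0
for _ in range(trials):
    p = random.random()            # shared accuracy p ~ Uniform(0, 1)
    karen = random.random() < p    # each contestant answers correctly w.p. p
    melissa = random.random() < p
    if not karen and not melissa:  # I win iff both of them miss
        wins += 1
print(wins / trials)  # close to 1/3
```

Note that my own answer never enters the loop, matching the observation that it didn’t matter given these wagers.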

Congratulations to Michael Bradley of London, England, winner of last week’s Riddler Classic.

It feels like there’s more parity in college basketball’s March Madness than ever, with lower-seeded teams advancing further in the tournaments at the expense of the favorites.

For last week’s Riddler Classic, you supposed that each team was equally likely to win any given game. What were the chances that the Sweet 16 consisted of *exactly one of each seed*?

The key to this puzzle was recognizing the inherent structure of the March Madness bracket. For example, in each of the four regions, the 1 seed plays against the 16 seed in the first round, and then the winner of that game plays the winner of the 8 seed vs. the 9 seed in the second round. That meant that exactly one of those four teams (1, 16, 8 and 9) could make it to the Sweet 16 out of each of the four regions. There were 4^{4}, or 256, ways to choose which of these seeds advanced to the Sweet 16. But there were only 4!, or 24, ways to have a 1 seed in one region, a 16 seed in another, an 8 seed in another and a 9 seed in the last. Therefore, the probability of having a 1 seed, a 16 seed, an 8 seed *and* a 9 seed in the Sweet 16 was 24/256, or 3/32.

Because of the bracket’s structure, the same was true for the 5, 12, 4 and 13 seeds, the 6, 11, 3 and 14 seeds, and the 7, 10, 2 and 15 seeds. For all four of these clusterings of seeds, the probability that one of each type of seed advanced to the Sweet 16 was 3/32. And because each clustering was independent of the others, that meant the probability of having all 16 seeds represented in the Sweet 16 was (3/32)^{4}, which was **81/1,048,576**, or about 0.0077 percent.
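Both numbers are easy to confirm exactly. A sketch (encoding each region’s outcome as an independent, uniform choice of one seed from the cluster, which follows from the symmetry argument above):

```python
from fractions import Fraction
from itertools import product

# One cluster, e.g. seeds (1, 16, 8, 9): each of the four regions sends
# exactly one of these four seeds to the Sweet 16, each with probability 1/4.
outcomes = list(product(range(4), repeat=4))           # 4^4 = 256, equally likely
distinct = sum(1 for o in outcomes if len(set(o)) == 4)  # all four seeds appear
p_cluster = Fraction(distinct, len(outcomes))

print(p_cluster)       # -> 3/32
print(p_cluster ** 4)  # -> 81/1048576, all four clusters at once
```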

For extra credit, you now assumed that seed *A* would defeat seed *B* with probability 0.5 + 0.033·(*B*−*A*). Again, what were the chances that the Sweet 16 consisted of one of each seed?

To figure this out, let’s take a closer look at the 1 seed. To advance to the Sweet 16, it had to defeat the 16 seed in the first round, which occurred with probability 0.5 + 0.033·15, or 0.995. Then, it had to defeat either the 8 seed with probability 0.731 (the 53.3 percent of the time the 8 seed advanced to the second round) or the 9 seed with probability 0.764 (the 46.7 percent of the time the 9 seed advanced to the second round). All told, each 1 seed had a 74.27 percent chance of making it to the Sweet 16.

A similar analysis for the remaining seeds revealed that the 2 seed had a 65.47 percent chance of making it to the Sweet 16, the 3 seed had a 56.46 percent chance, and so on. As noted by solver Kiera Jones, to find the probability that each seed made it, you had to multiply all these probabilities together, but then multiply by (4!)^{4} to account for all the different ways these seeds could have come from the four regions. In the end, this probability turned out to be approximately **8.53×10^{−10}**.
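For the extra credit, the whole computation fits in a short script. A sketch, assuming the standard bracket pairings within each cluster of four seeds (the first two seeds listed meet in round 1, as do the last two):

```python
from math import factorial, prod

def p_beat(a, b):
    # Jeremy's model: seed a defeats seed b with probability 0.5 + 0.033*(b - a)
    return 0.5 + 0.033 * (b - a)

# Each cluster feeds one Sweet 16 slot per region; round-1 pairings are
# (first vs second) and (third vs fourth) as listed.
clusters = [(1, 16, 8, 9), (5, 12, 4, 13), (6, 11, 3, 14), (7, 10, 2, 15)]

def p_sweet16(seed, cluster):
    a, b, c, d = cluster
    if seed in (a, b):
        opp1, pair = (b if seed == a else a), (c, d)
    else:
        opp1, pair = (d if seed == c else c), (a, b)
    x, y = pair
    # Win round 1, then beat whichever team survives the other pairing
    return p_beat(seed, opp1) * (
        p_beat(x, y) * p_beat(seed, x) + p_beat(y, x) * p_beat(seed, y)
    )

# Multiply every seed's chance, then count the 4! assignments of each
# cluster's seeds to the four regions.
prob = prod(p_sweet16(s, c) for c in clusters for s in c) * factorial(4) ** 4
print(f"{prob:.3g}")  # about 8.53e-10
```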

Parity or no parity, it will be a *very* long time until we see all seeds 1-16 represented in the Sweet 16.

Email Zach Wissner-Gross at riddlercolumn@gmail.com.


This past Wednesday I competed on “Jeopardy!” (No, this is not part of a fictional riddle. This really happened.) It was an incredible experience and a decade-long dream come true — even though I’m still haunted by the timing of that dang buzzer.

Heading into the Final Jeopardy! round, challenger Karen Morris was leading the way with $11,400. Returning champion Melissa Klapper had $8,700. Meanwhile, I was running in a not-too-distant third with $7,200. The Final Jeopardy! category was revealed as being “American Novelists,” and it was now time for all three of us to wager anywhere from $0 to the total amount we had for this one final clue.

Despite the dramatic swings in the match, my assessment was that all three of us were somewhat evenly matched in terms of knowledge. Having studied my opponents, I was also confident that Karen would wager enough money to cover the most aggressive wager from Melissa, and that Melissa would wager enough to cover my most aggressive wager.

With these assumptions, it was logical for me to keep my own wager small, since my only chance at winning was if both Karen and Melissa guessed incorrectly. Not particularly liking the category, I chose to wager $0. What was the maximum dollar amount I could have wagered without affecting my chances of winning? (Again, assume Karen wagered enough to cover Melissa and Melissa wagered enough to cover me.)

*Extra credit:* In the end, Karen wagered $6,001, Melissa wagered $8,000 and I wagered $0 (as I already said). Suppose all three of us had the same probability *p* of getting Final Jeopardy! correct, and that these three events were independent of one another. If the value of *p* is random and uniformly distributed between 0 and 1, what was my probability of winning the match?

We’ve had a number of combinatorial puzzles in recent weeks, but this submission from Jeremy Bailin was too timely to pass up:

It feels like there’s more parity in college basketball’s March Madness than ever, with lower-seeded teams advancing further in the tournaments at the expense of the favorites. This year’s Sweet 16 on the men’s side consists of two 1 seeds, two 2 seeds, two 3 seeds, two 4 seeds, a 5 seed, a 6 seed, a 7 seed, an 8 seed, a 9 seed and a 15 seed. This got Jeremy wondering about the likelihood that the Sweet 16 consists of exactly one of each seed: one 1 seed, one 2 seed, etc., up to one 16 seed.

Suppose each team is equally likely to win any given game. What are the chances that the Sweet 16 does indeed consist of one of each seed?

*Extra credit:* Looking at historical data on the men’s side, Jeremy estimates that the probability that seed *A* will defeat seed *B* is 0.5 + 0.033·(*B*−*A*). Using these probabilities, what are the chances that the Sweet 16 consists of one of each seed?

Congratulations to Henry Hannon of Arlington, Massachusetts, winner of last week’s Riddler Express.

Last week, you were asked to analyze part of a musical composition. At one point in the piece, there was an improvisational passage where musicians were instructed to repeatedly play a sequence of eight notes, labeled 1 through 8. The shortest such sequence was 12345678.

However, musicians could also revert to previous notes, replaying certain subsequences for additional flair. More specifically:

- They always had to play the next note (i.e., adding 1 to the previous note), unless they reverted to a previous note.
- At no point could they play the same note twice in a row.
- Notes 1 and 8 — the first and last notes — could be played only once.
- They could only revert to a given note at most once.
- Once they reverted to a specific note, they couldn’t then revert to an earlier note in the sequence.

The following were examples of valid sequences:

- 12345678 (This was the shortest sequence.)
- 1234567-234567-34567-4567-567-678 (This was the longest sequence.)
- 1234-234567-678
- 1234567-345-4567-5678
- 123-234567-3456-45678

I also offered examples of *invalid* sequences, for various reasons:

- 1245678 (This skipped note 3.)
- 12437568 (Some notes were out of order.)
- 12345-34678 (This skipped a note within a reversion, even though that note occurred earlier.)
- 1234-3456-345678 (This reverted to the same note twice.)
- 12345-456-2345678 (This reverted to an earlier note after reverting to a later one.)
- 12345-567-678 (This repeated a note twice in a row.)
- 123-1234567-5678 (This repeated note 1.)
- 1234-23456-5678-78 (This repeated note 8.)

How many different sequences of the eight notes were possible under these conditions?

At first, you might have been tempted to list out the valid sequences by hand. But after a hundred or two, that temptation probably wore off. (Shout out to Paige Kester, who listed all the sequences in a spreadsheet!)

The key to this puzzle was finding some way of uniquely identifying a sequence, using fewer digits to encode the entire sequence. Ideally, those fewer digits would make the task of counting the sequences more combinatorial and less manual.

For starters, you could ask whether the sequence reverted to a given note: Did it revert to 2? To 3? And so on, up to 7. The digits from 2 to 7 comprised six numbers, each of which could either have been reverted to or not. So you might have thought the answer was 2^{6}, or 64.

But it wasn’t. To see why, consider all the sequences that reverted back to 4 and only 4. Here were all three:

- 12345-45678
- 123456-45678
- 1234567-45678

While they all reverted back to 4, you could distinguish them based on which number came immediately before the reversion to 4. This number had to be greater than 4 but less than 8, leaving three possibilities: 5, 6 and 7.

Looking across all the digits, there were five digits that could have immediately preceded a reversion to 2, four digits that could have preceded a 3, three digits that could have preceded a 4 (as we just said), two digits that could have preceded a 5 and one digit that could have preceded a 6. So there weren’t simply two possibilities for each digit (i.e., revert vs. don’t revert) — there were different numbers of possibilities for each digit. For example, the digit 2 had six possibilities for reversion: one case in which the sequence never reverted to 2 at all, and five cases where it did and had different preceding digits.

That meant the total number of valid sequences was 6·5·4·3·2, or 6!, which was **720**. For the record, I accepted this answer.
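This count can be confirmed by brute force. A sketch, encoding the rules as a depth-first search (the state encoding is my own; it assumes each reversion target is lower than the note played just before the reversion):

```python
def count_sequences():
    # n = note just played; last = highest reversion target used so far.
    # Targets must lie in 2..7, strictly increase, and (in this reading)
    # be lower than the note played just before the reversion.
    def dfs(n, last):
        if n == 8:                            # note 8 is played once and ends it
            return 1
        total = dfs(n + 1, last)              # default move: play the next note
        for r in range(max(2, last + 1), n):  # or revert to an earlier note r < n
            total += dfs(r, r)
        return total
    return dfs(1, 1)                          # start at note 1; 1 is never a target

print(count_sequences())  # -> 720
```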

But as it turned out, there was another interpretation of the puzzle that allowed for even more sequences. In the solution above, we assumed that the number before the reversion had to be greater than the number that was reverted to. But was that necessarily the case? Consider the sequence 1234567-23-5678. I admit this is a less aesthetically pleasing sequence — it *feels* like there should be a 4 between the second 3 and the second 5. But if you label that second 5 as a reversion, then technically this sequence satisfies all the criteria. (I never said that you had to revert to a note from the *most recent* subsequence.) Including these sorts of sequences as well, a few solvers counted up a grand total of **1,245** valid sequences — an answer I also accepted.

Congratulations to Alex Klapheke of Cambridge, Massachusetts, winner of last week’s Riddler Classic.

Brett plays poker with a large group of friends. With so many friends playing at the same time, Brett needs more than the 52 cards in a standard deck. This got Brett and his friends wondering about a deck with more than four suits.

Suppose you have a deck with more than four suits, but still 13 cards per suit. And further suppose that you’re playing a game of five-card stud — that is, each participant is dealt five cards that they can’t trade.

As the number of suits increases, the probability of each hand changes. With four suits, a straight is more likely than a full house (a three-of-a-kind and a different two-of-a-kind in the same hand). How many suits would the deck need so that a straight (not including a straight flush) is *less likely* than a full house?

For a standard deck with four suits, the probabilities for a straight and a full house aren’t exactly common knowledge, but they are readily available. Let’s briefly work through the calculations for four suits anyway, since they will be helpful when generalizing the number of suits.

To make a straight, you first had to choose the lowest card in the hand, which could have been anywhere from an ace to a 10. Then you had to assign one of the four suits to each of the five cards. In total, this gave you 10·4^{5}, or 10,240 hands. However, this also included the straight (and royal) flushes, of which there were 10 for each suit. That meant there were 10,200 straight hands with four suits. Generalizing to *N* suits, the 10·4^{5} became 10·*N*^{5}, and instead of subtracting 10·4 for straight flushes you had to subtract 10·*N*. This gave you a total of 10·(*N*^{5}−*N*) straight hands with *N* suits. You could factor this polynomial a little further to get 10·*N*·(*N*^{2}+1)·(*N*+1)·(*N*−1).

Meanwhile, to make a full house, you first had to choose among the 13 numbers for the three-of-a-kind and then among the 12 remaining numbers for the two-of-a-kind. Then you had to pick the three suits represented by the three-of-a-kind (4 choose 3 such ways) and the two suits represented by the two-of-a-kind (4 choose 2 such ways). So with four suits, there were 13·12·4·6, or 3,744 full house hands. Sure enough, with four suits, a full house was much less likely than a straight, since there were roughly one-third as many hands resulting in a full house. Generalizing to *N* suits again, the 13 and the 12 stayed the same, but the 4 choose 3 became *N* choose 3 and the 4 choose 2 became *N* choose 2. This gave you a total of 13·*N*^{2}·(*N*−1)^{2}·(*N*−2).

From there, to find when the probability of a full house exceeded that of a straight, you had to solve the polynomial inequality 13·*N*^{2}·(*N*−1)^{2}·(*N*−2) > 10·*N*·(*N*^{2}+1)·(*N*+1)·(*N*−1). This immediately reduced to 13·*N*·(*N*−1)·(*N*−2) > 10·(*N*^{2}+1)·(*N*+1). Expanding and simplifying this inequality gave you 3*N*^{3}−49*N*^{2}+16*N*−10 > 0, which turned out to be true when *N* was at least 17. In other words, you needed **at least 17 suits** for a full house to be more likely than a straight. (That’s a deck with 221 cards!)
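A short script can verify both counting formulas and locate the crossover (the helper names are mine):

```python
from math import comb

def straights(n):
    # 10 choices for the low card, n suit choices for each of 5 cards,
    # minus the 10*n straight (and royal) flushes
    return 10 * (n**5 - n)

def full_houses(n):
    # 13 ranks for the three-of-a-kind, 12 for the pair, then their suits
    return 13 * 12 * comb(n, 3) * comb(n, 2)

assert straights(4) == 10_200 and full_houses(4) == 3_744  # standard deck

n = 4
while full_houses(n) <= straights(n):  # scan upward for the crossover
    n += 1
print(n)  # -> 17
```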

Alex, this week’s winner, went further and plotted how these probabilities changed with the number of suits. Interestingly, there’s a nonzero asymptotic limit for both a straight and a full house. Can you figure out what they are?

For extra credit, instead of five-card stud, you were asked to analyze seven-card stud. This time, you were dealt seven cards, among which you had to pick the best five-card hand. Again, how many suits would the deck have needed so that a straight (not including a straight flush) was less likely than a full house?

I’ll spare you the details, as this was a rather brutal exercise in combinatorial casework. For example, when determining the probability of a straight, you had to consider whether the two extra cards were duplicates of one of the other cards, of two of the other cards, or of none of the other cards. For each case, you had to exclude flushes. Anyway, in the end, a full house was more likely than a straight when you had **at least eight suits** in the deck.

Solver Matt St. Hilaire generated a graph showing when these probabilities crossed. While the five-card stud probabilities had nonzero asymptotes, apparently the same couldn’t be said for seven-card stud.

Email Zach Wissner-Gross at riddlercolumn@gmail.com.

Welcome to Pollapalooza, our weekly polling roundup.

The oldest poll I could find on Taylor Swift was from 2010. Even then, it was clear she was on her way to being a pop-culture phenomenon. When CBS News/60 Minutes/Vanity Fair asked American adults which of five musicians they would most like to have dinner with, 22 percent said Swift — more than Jay-Z, Susan Boyle and Lady Gaga, and second only to Paul McCartney. And among respondents aged 18-29, she was No. 1.

Thirteen years and several hit albums later, Swift has cemented her place as one of music’s biggest superstars. According to a recent Morning Consult poll, 53 percent of American adults identified as fans of Swift. And last weekend, she kicked off her widely anticipated — and logistically messy — Eras Tour, a retrospective revue of her musical evolution. (If anyone has a spare ticket to one of the shows this weekend in Las Vegas, DM me.)

But when you ask Swift fans which era they liked the most, you get 10 different answers. To celebrate the start of the Eras Tour, Morning Consult and another pollster, YouGov, asked people to identify their favorite Swift album, and they found plenty of disagreement. But those disagreements can teach us something valuable about polling.

Share of respondents who identified each Taylor Swift album as their favorite, according to a YouGov poll and two different versions of a Morning Consult poll

Album | YouGov | Morning Consult (all adults) | Morning Consult (avid fans) |
---|---|---|---|
Taylor Swift | 11% | 6% | 14% |
Fearless | 10 | 7 | 12 |
Fearless (Taylor’s Version) | 4 | | 7 |
Speak Now | 5 | 2 | 5 |
Red | 11 | 5 | 10 |
Red (Taylor’s Version) | 2 | | 7 |
1989 | 13 | 7 | 15 |
Reputation | 7 | 2 | 4 |
Lover | 15 | 2 | 5 |
Folklore | 4 | 1 | 4 |
Evermore | 4 | 2 | 3 |
Midnights | 8 | 2 | 7 |
In the YouGov poll, 15 percent of people who like or love at least one of Swift’s albums identified “Lover” as their favorite, followed by 13 percent of people who identified “1989” as their favorite. But in Morning Consult’s version, a plurality of self-identified “avid” Swift fans preferred “1989,” while “Lover” had only 5 percent support. Swift’s debut album, “Taylor Swift,” was second in the Morning Consult poll with 14 percent, and “Fearless” was third with 12 percent. And the results look weirder still when you look at Morning Consult’s results among all adults, not just avid fans. There, “Fearless” is tied with “1989” at No. 1 with 7 percent, followed by “Taylor Swift” at 6 percent.

These polls may seem all over the place, but there’s good reason for them to disagree. First, most of these differences are within the polls’ margins of error. Basically, whenever you poll only a small sample of a larger population, some polling error is inevitable — usually enough to explain minor differences between various polls. Since the margin of error in Morning Consult’s poll of avid fans was ±5 percentage points, the actual number of fans whose favorite album is “1989” could be anywhere from 10 percent to 20 percent. That range includes the 13 percent who picked “1989” in the YouGov poll, so the two aren’t necessarily contradictory.

Second, the population being polled matters. It’s not surprising that the entire population of American adults has different tastes from people who like or love at least one of Swift’s albums, or from avid Swift fans. The latter groups may have additional familiarity with some of these albums that helps them appreciate those albums more. Political polls can disagree (without disagreeing) in the same way: A poll of adults may have different results from a poll of likely voters. So when conducting political analysis, we look at polls that are right for the context (e.g., when forecasting elections, we look at surveys of likely voters).

Third, how the pollster asks its questions is important. Look closely at these two polls, and you’ll notice that Morning Consult asked about the “Taylor’s Versions” of “Fearless” and “Red” separately from the originals, which could be affecting the toplines.^{1} If you add the two together, 19 percent of avid Swift fans prefer one of the two versions of “Fearless,” and 17 percent prefer one of the versions of “Red.” That’s more than the 15 percent who preferred “1989” (though again, still within the margin of error)!

Similarly, the headlines of political polls can hinge — sometimes unfairly — on a pollster’s choices. For example, in 2018, Quinnipiac University found that 32 percent of voters held Democrats in Congress responsible for the recent government shutdown, 31 percent held then-President Donald Trump responsible, and 18 percent held Republicans in Congress responsible. Some of the news coverage of the poll focused on the fact that a plurality of voters blamed Democrats. But as this column pointed out, when you totaled the results by party, voters blamed Republican politicians over Democratic ones, 49 percent to 32 percent.

When poring over polls, it’s important to bear these guidelines in mind and not jump to conclusions. Of course, that goes equally for something serious like predicting the next presidential election and something fun like determining a music legend’s most popular album. (And by the way, the answer should obviously be “1989.”)

- Morning Consult’s demographic breakdown of Swift fans is worth a deep dive too. Millennials and women are her “base,” so to speak, with 58 percent of the former and 56 percent of the latter identifying as Swift fans. Interestingly, 62 percent of Democrats are Swift fans, but only 48 percent of Republicans are. That’s a shift from that 2010 CBS News survey when Republicans were much more likely than Democrats to say they wanted to have dinner with Swift. That could reflect Swift’s evolution from a country artist to an outspokenly liberal pop star.
- Back to politics: According to Ipsos/Reuters, 54 percent of Americans think Trump’s potential indictment in connection with an alleged hush money payment to porn actor Stormy Daniels is politically motivated. In comparison, 38 percent think it isn’t. However, Americans weren’t necessarily prepared to jump into action. Seventy-seven percent of adults said they would do nothing if he was arrested, while 6 percent said they would protest, 6 percent said they would donate to his legal defense fund, and 4 percent even said they would take up arms. But this is probably an example of expressive responding — people responding emotionally to a poll question without literally meaning it. Some studies have found that polls can overestimate the number of people willing to engage in political violence.
- Florida Gov. Ron DeSantis, a likely Republican presidential candidate, recently stirred up some intraparty dissent when he expressed skepticism about aiding Ukraine in its war against Russia. Turns out, the disagreement among Republican elites on foreign policy also extends to voters — but most of them are on DeSantis’s side. According to a Morning Consult poll conducted last week, 46 percent of potential GOP primary voters thought supporting Ukraine is not a vital U.S. interest, while 37 percent thought it is. That’s in keeping with a long-term shift among Republicans toward isolationism.
- President Biden recently upset progressives by approving a new oil-drilling project in Alaska, but a Morning Consult poll shows that more American adults approve of it than disapprove, 48 percent to 27 percent. Interestingly, support for the project is roughly equivalent among Republicans (54 percent) and Democrats (48 percent). However, news of the project doesn’t seem to have reached everyone; 25 percent of adults had no opinion to share.
- Eight states have passed legislation banning gender-affirming care for children under 18, and several more are considering doing so. However, American adults oppose such bills, 53 percent to 41 percent, according to a new Selzer and Co. poll for Grinnell College.

According to FiveThirtyEight’s presidential approval tracker,^{2} 42.7 percent of Americans approve of the job Biden is doing as president, while 52.8 percent disapprove (a net approval rating of -10.1 points). At this time last week, 43.7 percent approved and 51.5 percent disapproved (a net approval rating of -7.8 points). One month ago, Biden had an approval rating of 43.2 percent and a disapproval rating of 51.7 percent, for a net approval rating of -8.5 points.

^{1} and you may get a shoutout in the next column. Please wait until Monday to publicly share your answers! If you need a hint or have a favorite puzzle collecting dust in your attic, find me on Twitter or send me an email.

From composer Grant Harville comes a musical mystery:

Grant is writing a musical composition. At one point in the piece, there’s an improvisational passage where musicians are instructed to repeatedly play a sequence of eight notes, which we can label as 1 through 8. The shortest such sequence is 12345678.

However, musicians can also revert to previous notes, replaying certain subsequences for additional flair. More specifically:

- They must always play the next note (i.e., adding 1 to the previous note), unless they revert to a previous note.
- At no point can they play the same note twice in a row.
- Notes 1 and 8 — that is, the first and last notes — can be played only once.
- They can only revert to a given note at most once.
- Once they have reverted to a specific note, they cannot then revert to an earlier note in the sequence.

That’s a whole bunch of rules! To make this clearer, it may be helpful to see some examples. The following are examples of valid sequences:

- 12345678 (This is the shortest sequence.)
- 1234567-234567-34567-4567-567-678 (This is the longest sequence.)
- 1234-234567-678
- 1234567-345-4567-5678
- 123-234567-3456-45678

Meanwhile, here are examples of *invalid* sequences, for various reasons:

- 1245678 (This skips the 3.)
- 12437568 (Some notes are out of order.)
- 12345-34678 (This skips a note within a reversion, even though that note occurs earlier.)
- 1234-3456-345678 (This reverts to the same note twice.)
- 12345-456-2345678 (This reverts to an earlier note after reverting to a later one.)
- 12345-567-678 (This repeats a note twice in a row.)
- 123-1234567-5678 (This repeats note 1.)
- 1234-23456-5678-78 (This repeats note 8.)
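
That whole bunch of rules can be condensed into a quick validity check. Here is a minimal Python sketch of one reading of the rules, inferred from the examples above (the function name and the hyphen-delimited input format are illustrative choices, not part of the puzzle):

```python
def is_valid(seq: str) -> bool:
    """Check a candidate sequence, written with hyphens before each reversion."""
    notes = [int(c) for c in seq.replace("-", "")]
    if notes[0] != 1 or notes[-1] != 8:
        return False
    if notes.count(1) != 1 or notes.count(8) != 1:  # notes 1 and 8 played only once
        return False
    targets = []
    for a, b in zip(notes, notes[1:]):
        if b == a + 1:          # the required "next note"
            continue
        if b >= a:              # a skipped note, or the same note twice in a row
            return False
        targets.append(b)       # otherwise this step is a reversion to note b
    # each target reverted to at most once, never earlier than the previous target
    return targets == sorted(targets) and len(targets) == len(set(targets))
```

Running the five valid and eight invalid examples above through this check accepts and rejects each of them for the reasons given.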

How many different sequences of the eight notes are possible under these conditions?

From Brett Humphreys comes a card-counting conundrum:

Brett plays poker with a large group of friends. With so many friends playing at the same time, Brett needs more than the 52 cards in a standard deck. This got Brett and his friends wondering about a deck with more than four suits.

Suppose you have a deck with more than four suits, but still 13 cards per suit. And further suppose that you’re playing a game of five-card stud — that is, each participant is dealt five cards that they can’t trade.

As the number of suits increases, the probability of each hand changes. With four suits, a straight is more likely than a full house (a three-of-a-kind and a different two-of-a-kind in the same hand). How many suits would the deck need so that a straight (not including a straight flush) is *less likely* than a full house?
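
To get a feel for how the comparison shifts, the two five-card counts can be written as functions of the number of suits. This is a sketch under the standard definitions (13 ranks, 10 possible rank runs counting ace-low and ace-high straights):

```python
from math import comb

def full_houses(suits: int) -> int:
    # 13 ranks for the triple (choose 3 of the suits),
    # then 12 remaining ranks for the pair (choose 2 of the suits)
    return 13 * comb(suits, 3) * 12 * comb(suits, 2)

def straights(suits: int) -> int:
    # 10 rank runs, any suit for each of the 5 cards,
    # minus the 10 * suits straight flushes
    return 10 * suits**5 - 10 * suits
```

With four suits, these give the familiar 3,744 full houses and 10,200 straights of a standard deck, confirming that the straight is (for now) the likelier hand.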

*Extra credit:* Instead of five-card stud, suppose you’re playing seven-card stud. You are dealt seven cards, among which you pick the best five-card hand. How many suits would the deck need so that a straight (not including a straight flush) is *less likely* than a full house?

Congratulations to Kris Adams of Bartonville, Illinois, winner of last week’s Riddler Express.

Last week, Bill had four opaque bags, each of which had three marbles inside. Three of the bags contained two white marbles and one red marble, while the last bag contained three white marbles. The bags were otherwise indistinguishable.

Ted watched as Bill randomly selected a bag and reached in without looking to grab two marbles without replacement. It so happened that both marbles were white. Bill was about to reach in and grab the last marble in that bag.

What was the probability that this marble was red?

As with other famous riddles related to conditional probability (like Monty Hall and the two child problem), your intuition could lead you astray here.

Some readers observed that, after removing two white marbles from among the 12 total marbles, Bill was left with three red marbles out of a total of 10. Therefore, the probability the last marble was red should have been 3/10. However, this was *not* the right answer.

Other readers argued that because *all three* bags had two white marbles, drawing two white marbles offered no new information about which bag Bill had selected. Because three of the four bags had a red marble, the probability the last marble was red should have been 3/4. However, this too was *not* the right answer.

To see why, suppose Bill randomly selected two marbles from a bag with a red marble. In this case, he had a two-thirds chance of picking one red and one white marble, as well as a one-third chance of picking two white marbles. But for the remaining bag with three white marbles, Bill was *guaranteed* to choose two white marbles.

That meant Bill was three times more likely to pick two white marbles from the bag without a red marble than he was from each bag with a red marble. At the same time, there were three times as many bags with a red marble as there were bags without a red marble. And so the final marble was equally likely to be red or white; the probability that it was red was **50 percent**.

If you’re still not convinced, you can simulate this for yourself at home. Set up the four bags, pick a random bag and then draw two marbles. But remember, if you happen to draw one red and one white marble, then you should discard that simulation. Only when you draw two white marbles is there a 50 percent chance that the last marble is red.
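
That home simulation is easy to script. Here's a minimal Monte Carlo sketch in Python (the function and variable names are my own):

```python
import random

def last_marble_red(trials: int = 200_000, seed: int = 1) -> float:
    """Estimate P(last marble is red | first two drawn are white)."""
    rng = random.Random(seed)
    bags = [["R", "W", "W"]] * 3 + [["W", "W", "W"]]
    red = kept = 0
    for _ in range(trials):
        bag = list(rng.choice(bags))   # pick a random bag
        rng.shuffle(bag)               # shuffled order = draw order
        if bag[0] == "W" and bag[1] == "W":
            kept += 1                  # keep only the two-white-draw trials
            red += bag[2] == "R"
    return red / kept
```

The estimate hovers around 0.5, matching the analysis above.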

Congratulations to Tom Singer of Melbourne, Florida, winner of last week’s Riddler Classic.

Last week you decided to set up a marble race course. No Teflon was spared, resulting in a track that was effectively frictionless.

The start and end of the track were 1 meter apart, and both positions were 10 centimeters off the floor. It was up to you to design a speedy track. But the track always had to be at floor level or higher.

What was the fastest track you could design, and how long would it have taken the marble to complete the course?

From an introductory physics course, you know that the lower down the marble was, the less potential energy it had and the more kinetic energy it had, and thus the faster it moved. So one track design was to have the marble go straight down, at which point an infinitesimal lip redirected the marble horizontally along the floor. Once it was directly below the finish line, another infinitesimal lip redirected the marble straight up.

How long did it take for the marble to traverse such a course? If the initial descent (and, symmetrically, the final ascent) took *t* seconds, then you knew *h* = *gt*^{2}/2, where *h* was the initial height of the marble (0.1 meters) and *g* was the acceleration due to gravity, approximately 9.8 m/s^{2} at Earth’s surface. Solving this equation gave you *t* = 1/7 s. Meanwhile, the marble’s velocity along the floor was equal to the square root of 2*gh*, or 1.4 m/s. Traversing the floor at this speed took 5/7 s. Adding up all these times meant the marble reached the finish after precisely 1 second. But it was possible to get there *even faster*.
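
Under the stated values (*h* = 0.1 m, *g* = 9.8 m/s^{2}), that arithmetic for the drop-and-dash track checks out in a few lines:

```python
from math import sqrt

g, h, d = 9.8, 0.1, 1.0              # gravity, drop depth, horizontal span (SI units)

t_drop = sqrt(2 * h / g)             # from h = g*t**2/2: exactly 1/7 s each way
v_floor = sqrt(2 * g * h)            # 1.4 m/s along the floor
t_total = 2 * t_drop + d / v_floor   # 1/7 + 5/7 + 1/7 = 1 s
```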

Last week I had said this puzzle was likely to “break your brachistochrone,” with that last word being the operative one. A brachistochrone is a path that takes an object from one place to another in minimal time using the force of gravity. But this exact path remained a mystery until it was solved by several big names in mathematics in the late 17th century. The brachistochrone turned out to be a segment of a cycloid (the path outlined by a single point on a rolling circle). Having the marble travel along a cycloid is faster than the aforementioned straight drop down. While a straight drop gets the marble to its maximum speed faster, the cycloid reduces the overall time by moving the marble closer to its destination as it accelerates.

Of course, this being The Riddler, the optimal path was not merely a cycloid. To travel a distance of 1 meter without any net change in elevation, a cycloid would have to dip 1/𝜋 m, or about 31.83 cm. This was impossible, as the puzzle stated the marble could not pass through the floorboards 10 cm below the starting point.

The solution was therefore to get the marble reasonably far along via a downward cycloid, then travel horizontally at high speed and finally return back up to the finish line along an upward cycloid. As noted by solvers Paige Kester and Laurent Lessard, having half a complete cycloid period (also known as a tautochrone) on either end, as shown below, did the trick. Solver Starvind even animated the marble along the track.

The time to traverse either tautochrone was 𝜋√(0.05/9.8), or about 0.2244 s. Once again, the marble traveled at a speed of 1.4 m/s along the flat portion of the track, which was now 1−𝜋/10 m long. Adding up the times for the flat portion and the two tautochrones gave a total time of approximately **0.9387 s**, which was indeed faster than the track with the initial straight drop. (Solvers who used different values of *g* got slightly different answers.)
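
The cycloid-track arithmetic can be verified the same way. This sketch uses the article's *g* = 9.8 m/s^{2} and the tautochrone traversal time 𝜋√(*h*/(2*g*)):

```python
from math import pi, sqrt

g, h, d = 9.8, 0.1, 1.0                    # gravity, dip depth, horizontal span (SI units)

t_tauto = pi * sqrt(h / (2 * g))           # = pi*sqrt(0.05/9.8), about 0.2244 s per end
v = sqrt(2 * g * h)                        # 1.4 m/s along the flat section
t_total = 2 * t_tauto + (d - pi * h) / v   # the flat section is 1 - pi/10 m long
```

Here `t_total` comes out to roughly 0.9387 s, beating the 1-second straight-drop track.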

In the end, you literally had to “break your brachistochrone” into two parts — separated by a flat track, of course.

Email Zach Wissner-Gross at riddlercolumn@gmail.com.

There was nothing overtly biased about the way the Wilkes-Barre Township Police Department described a mugging on its Facebook page in February 2019. The first post simply described a Black suspect who was alleged to have threatened a victim with a gun and demanded cash in this small community in northeastern Pennsylvania. Two later Facebook posts about the case congratulated the police on catching the suspect.

But two years before, when a white man had robbed a gas station at gunpoint and fled the scene, the police department’s social media response was completely different. There was no mention of the case on social media at all, according to John Rappaport, a professor of law at the University of Chicago who is part of a team studying racial bias in law enforcement social media accounts. Not before the suspect was arrested, to warn the public and seek their help in an arrest. And not after, to reassure the community that the suspect had been caught. “The crimes are quite similar,” Rappaport said. “[It undermines] any notion that crime severity is straightforwardly driving the department’s posting decisions.”

This is just one example of a larger pattern of bias that Rappaport’s team found when they analyzed nearly 14,000 Facebook pages maintained by law enforcement agencies across the United States. They found that police Facebook pages consistently overreport crimes by Black suspects relative to local arrest rates: Between 2010 and 2019, Black suspects were described in 32 percent of posts but represented just 20 percent of arrestees. It mirrors statistics that show white Americans overestimate the percentage of crimes committed by Black Americans by as much as 20 to 30 percent compared to the actual figures (numbers that, themselves, already reflect a bias in who gets arrested versus who actually commits crimes).

And scientists say it’s reasonable to suspect those two sets of statistics are connected to one another. “We really framed the paper as being less about ‘are police departments behaving well or badly,’ and more about the perspective of the reader,” Rappaport said. That’s because these biased accounts are likely part of feedback loops, reflecting bigger issues in society as police both respond to — and perpetuate — the myths white Americans already believe.

Wilkes-Barre isn’t uniquely problematic, Rappaport said. And not all of the law enforcement agencies his team looked at engaged in biased posting. But the totality of the data showed clear patterns that extended nationwide. Only a few areas didn’t overrepresent Black suspects, relative to actual arrests, including part of the Black Belt region of the South, where Black people make up the majority of the overall population.

The racial disparity in posts compared to arrests differed by type of crime, but was present across a variety of serious offenses. Car theft, for example, had the smallest disparity: There was less than a percentage point of difference between the percent of local auto thefts involving Black suspects and the percent of Facebook posts about auto thefts involving Black suspects. But the differences were much larger with other crimes. While Black suspects made up 22 percent of all theft arrests, 32 percent of Facebook posts about thefts involved Black suspects.^{1}

Overall, Black people’s involvement in violent crimes was being overreported by law enforcement Facebook pages by 11 percentage points and involvement in property crime was being overreported by 8 percentage points.

These differences may seem small, but Rappaport and outside researchers said the impacts of being exposed to these disparities can be wide reaching. I spoke with three other scientists, unaffiliated with Rappaport’s research, who also study American beliefs about race and crime. They all told me this paper is representative of larger, systemic issues with how race, crime and punishment are viewed in this country.

At a time when relationships between traditional media, like newspapers, and police have become strained, social media allows law enforcement to regain more control over narratives of crime, said Sarah Britto, a professor of criminal justice administration at California State University, Dominguez Hills. In decades past, researchers found evidence of traditional media overrepresenting Black people as perpetrators of crime and under-representing crimes committed by white suspects. That’s changed — newer research suggests Black Americans are now underrepresented as both suspects and victims of crime in cable and network news.

But back when researchers were finding clear bias in traditional media, research also suggested that portraying Black people as criminals had an impact on how viewers thought about criminals and criminality. For example, a study in the late 1990s exposed Los Angelenos to a news report in which an alleged perpetrator was identified as Black, white or without identifying information. It found that, when the perpetrator was Black, white viewers’ support for punitive laws increased by 6 percent — while that support only increased by 1 percent when the perpetrator in the news story was white.^{2}

But what the public already believes about race and crime could also be shaping what police post. Rappaport’s study opens up a whole new direction in research, said Tony Cheng, a professor of criminology at the University of California, Irvine. One of the things Cheng said he’d like to see studied in the future is the motivations and practices within police departments that create racial disparities in social media posts. He suspects that the nature of social media incentivizes police to seek traffic and “likes” as much as any other group or individual who is trying to build an audience. If a post produces a lot of engagement, the posters are likely to try to repeat the success with similar content. But that becomes a problem if the most popular posts are all about Black people committing crimes, Cheng said.

The irony here is that public communication through social media channels is often lauded as part of the best practices that improve transparency in policing, he told me. “[This shows us that] those very practices can be exacerbating racial biases in public information and crime information in ways that we wouldn't even really think about,” Cheng said.

And those biases are powerful, said Nicholas Valentino, a professor of political science at the University of Michigan. There’s lots of research showing that, the more Americans perceive poor people to be overwhelmingly Black, the less support they have for social welfare policies aimed at helping the poor, he said. It makes sense that portraying Black people as more likely to be arrested for a crime than they actually are would have a similar impact on how Americans view crime policy.

“That's not a controversial thing to say,” Valentino said. “What's interesting here is that this is neither new, nor even unique to this domain of political communication. It's widespread, and we've known about it for maybe 30 or 40 years.”

In the second half of 2021, housing prices rose faster in Florida than in any other state. In some cities, rents soared by as much as 30 percent that summer. This put an enormous strain on families living in Florida and already struggling to pay rent, especially those who worked in the tourism industry for moderate wages, many of whom lost income during lockdowns of 2020. Evictions began to rise after a pandemic-related moratorium ended that summer.

Cynthia Laurent, a housing justice coordinator for the political advocacy group Florida Rising, said she heard from people struggling all over the state. In response, her organization worked with others to launch a campaign for rent-stabilization laws in the most affected cities. In Orange County, which includes Orlando, voters passed a referendum to establish rent stabilization for certain apartments for one year, keeping residents in place while the markets adjusted and families found stable footing. For many, this is exactly how local government is supposed to work: A need arises, and people put pressure on local officials or vote to change their local laws. “I believe it was the most popular item on the ballot,” Laurent said. “It wasn’t Democrat or Republican, folks from all walks of life, party, class voted yes for rent stabilization.”

But Orange County’s rent-stabilization ordinance will likely never go into effect, thanks to preemption — a type of law that lets states stop cities from setting their own agendas.

Preemption is an old, broadly used tool, and in the past decade, preemption bills have passed across the country, blocking local legislation on everything from culture-war issues to basic city governance. In Florida, a state Senate bill passed last week would prevent local governments from enacting rent control or rent stabilization. This year, other states are considering laws revoking local authority over school curriculum and punishing local district attorneys who don’t prioritize laws passed by the state legislature. Other states are threatening to take over whole chunks of city government. And there may not be much cities can do about it.

The tug of war between state and local power is an old one. Local governments, whose responsibilities are not outlined in the U.S. Constitution, have different levels of authority depending on the state, and it’s not always clear exactly what authorities localities have. “It is very much a gray zone,” said Christine Baker-Smith, a research director at the National League of Cities. “The only place where it’s clearly not a gray zone is when there is clear, clear guidance around a certain policy area.”

What has happened in the past decade is what many experts call a shift from “minimalist” preemption to “maximalist” preemption. An example of a minimalist preemption law is the minimum wage. No state can have a minimum wage that’s lower than the $7.25 set by the federal government,^{3} but they can go higher, and cities and counties can pass laws that set even higher minimums than their states … as long as their state hasn’t forbidden it through preemption laws.

The shift began during President Barack Obama’s presidency. He often struggled to advance progressive goals in Congress, and Republicans made electoral gains in statehouses around the country. Partisanship also became more clearly geographical: More urban populations became more solidly Democratic than ever, while rural areas became even more Republican. With progressive priorities blocked at the state and federal levels, more liberal-leaning cities began passing ordinances on issues like gun control, higher minimum wages, sick leave and LGBTQ rights. “Urban areas can’t go to the legislature to get their voice heard,” Jocelyn Johnston, a professor at American University’s School of Public Affairs, told Pew in 2015, “so they’re going to do something in-house. That’s why this is happening. Most state legislatures are not as liberal as urban interests are.”

What’s happening now is a pushback from conservative organizations and red-state legislatures. “[A]ctivists have begun targeting local governments to create big government policy that could not survive at the state capitol,” said a 2015 op-ed in RedState, arguing states should pass preemption laws to protect businesses from excessive regulation in these cities. And by that point, states were already doing just that. A 2020 Economic Policy Institute analysis found the use of preemption was more prevalent in southern states.

In the past few years, at least 25 states have prohibited local governments from raising the minimum wage. Eighteen states bar municipalities from banning plastic bags. At least 20 states have laws that prevent cities from banning gas stoves. Oklahoma is considering a bill that would prevent cities from banning combustion engines. Forty-two states preempt local legislators from passing gun regulations.^{4}

Florida is one of 34 states that preempts many local housing laws, allowing rent stabilization only in an emergency; the bill that passed the state Senate last week would remove even that ability. The bill passed unanimously, but that was likely because the housing preemption was wrapped in a much larger bill, which includes measures to encourage mixed-use zoning and incentivize development of affordable housing. The bill’s proponents said it would help fix the housing shortage.

In many cases, these preemption laws were in direct response to cities’ actions. After widespread protests against police departments in the summer of 2020, states began preempting reforms or budget cuts to local police departments, with the governors of Florida and Georgia signing laws forbidding it.

Preemption laws keep expanding into new topic areas as well. This year, as of March 8, at least 493 preemption bills have been introduced into state legislatures around the country on a range of issues, according to the Local Solutions Support Center (LSSC), an organization that tracks certain preemption laws and advocates against them. Some of the biggest battles seem to be over LGBTQ rights and abortion, which fits a pattern, said Marissa Roy, head of the legal team at LSSC. She’s seen such bills originate with organizations like American Legislative Exchange Council and other think tanks. But preemption laws are also inspired by whatever culture wars are raging. “Pretty much any trends that you could note coming out in [the Conservative Political Action Conference] or on Fox News … you see them show up in preemption,” she said.

On abortion, the battle has turned to preempting local district attorneys from deciding how to use prosecutorial discretion. After the Supreme Court eliminated the constitutional right to abortion last summer, red states ramped up efforts to strictly limit the procedure, but some district attorneys in more urban, liberal areas pushed back, vowing not to prioritize enforcing those new laws. In Texas, lawmakers have introduced bills in the state House and Senate that would essentially require prosecutors to enforce all state laws or face penalties. Florida and Georgia are further enforcing preemption laws by penalizing local officials who don’t follow them. Florida Gov. Ron DeSantis suspended a Tampa-area state attorney after the attorney pledged not to enforce the state’s new abortion law, and DeSantis may suspend another one over a similar matter of enforcing state law. And the Georgia legislature is considering creating a commission with the power to remove prosecutors who “categorically” refuse to prosecute offenses that state law requires the prosecution of.

But local prosecutors have long had the discretion to set their own priorities, said Richard Briffault, a professor at Columbia University and an expert on preemption. “The state is saying, ‘No, you can’t do that for the hot-button issues that the state’s interested in,’” he said. “But at some point, they’re going to have to set priorities because they almost never are going to have the resources to prosecute everything, let alone the fact that some of these issues really do fly in the face of strong local preferences.”

It’s hard for cities to block these moves. Many state constitutions would come down on the side of the state, Roy said. According to her, reforms rooted in the 19th century gave local governments more authority than they’d had, but also allowed for state preemption. “The idea was that states would use preemption wisely to only ensure consistency where statewide consistency was needed,” Roy said. “Now, we’ve seen this abuse of preemption … and that is the exception that state legislatures have taken advantage of.”

Undoing these preemptions would require changing state constitutions or fighting for authority in new, specific policy areas, Baker-Smith said. And Roy added that states could change some of these laws through ballot initiatives or through the legislatures themselves, though she believed legislatures are unlikely to do so. In Oklahoma, two Democrats in the state House of Representatives have introduced two separate pieces of legislation to repeal some of the state’s preemption laws, but they face an uphill battle in the Republican-dominated Legislature.

The reality is that in the U.S. today, cities are more likely to be Democratic and progressive, even when they’re in red states. When they pass more liberal laws, it can be too tempting of a target for Republicans in the state’s legislature. “They can strongly come out, whether it’s in their campaigns or whatever the case may be, and say, ‘Oh, I was against raising the minimum wage. And here’s how I stopped it,’ or, ‘Here’s how I stopped drag shows in our community,’” said Oklahoma Rep. Cyndi Munson, a Democrat who introduced one of the bills.

For those who oppose what they call its overuse, preemption undermines the basic idea behind local governance — that communities get to set priorities that reflect their own values. Laurent said that preemption laws have a longer-term, corrosive effect on local participation. State legislatures are often influenced by special interests, she said, and preempting local action removes a tool people have to fight against that. “The entire purpose of having representatives is for folks to go up there and reflect the needs that your community has,” she said. “But unfortunately, that’s being silenced.”
