[This is published unfinished and nearly a year after reading the book, prompted by another reader’s post on the same topic.]
Ha! I’m back. And I’ve read another book. Thomas Schelling’s The Strategy of Conflict. In this book, he produced a rigorous grounding for the Cold War Balance of Terror, described how weakness in bargaining is often strength, and along the way introduced both the Schelling Point and the Schelling Fence (though, not having an ego the size of John Von Neumann’s, he did not call them by those names).
Setting the stage for the book: Thomas Schelling observed that game theory existed (thanks, Von Neumann) and that Mutual Assured Destruction existed (again, thanks, John), but that the two were only weakly tied together from a theoretical standpoint; the theory underlying them took as an axiom that war is inherently zero-sum, which he considered to be ill-founded.
He decided to construct an extension of game theory that dealt with variable-sum games and implicit communication: what rules arise in games and negotiations when the participants – by fiat or practicality – can’t communicate, and how this affects optimal strategies. Since nations at war find it understandably difficult to communicate reliably and believably, this seemed potentially important; and since, streetlight-effect-style, strategists would otherwise keep applying the zero-sum theory they already had, building a real theory of variable-sum (including negative-sum) games mattered for making the situation stable.
He was also a big influence on Kennedy during the Berlin Crisis and the Cuban Missile Crisis, and was influential in getting the hotline created.
While it’s still historically interesting, it’s lost some relevance since we now care almost as much about loose nukes being used by nonstate actors as about states launching nuclear wars. The theory has broader applications, though, and the underlying principles are fascinating.
The basic idea that underlies the whole book is that, in rationalist terms, the shared map can be as important as the territory or even more so. Because human minds tend to settle on the same things as the “most natural” option, and this is common knowledge, communication can happen implicitly. The more culture the participants share, the more reliable this gets; an alien or de novo AI might share very little, but two men from the same town in Iowa will share quite a lot. The simplest way to express this is a Schelling Point in cooperative games. For example, imagine you and another paratrooper drop and land at points X and Y on this map:
You don’t know where each other landed and cannot communicate by radio. You do have copies of the map. Where do you meet?
While you think a bit, consider another example. If you’re going to meet someone in Times Square, New York City tomorrow, but you can’t communicate and forgot to pick a time, when do you meet? You could pick dawn, dusk, noon, midnight, or many other times. Humans will generally pick noon, though; we’re diurnal and it’s a clear single point.
That’s a fact about humans, though, not about all minds. Intelligence-uplifted cats wouldn’t pick noon; they’re naturally crepuscular, awake at dawn and dusk, so noon is an inconvenient and unnatural time to pick. Their coordination points are weaker, and not quite the same as ours.
But back to the map question.
If you said “the bridge”, you understand the principle.
Schelling was not principally concerned with these, though. They were important, but only as a stepping stone to his thoughts about bargaining, and how a coordination point becomes significant as a bargaining position. Consider a coordination game where two people need to pick a dollar value between $2419.37 and $2693.01. If they pick the same number they each get that much money. Naturally, they will settle on $2500.00.
For the exact same reason, if two people are haggling over a nice TV and the seller says “I can’t possibly accept anything less than $2693.01”, the buyer will think That’s totally fake. And the same if he “can’t possibly accept” less than $2419.37. But if he says he can’t sell for under $2500, that is believable in a way the other two aren’t. If he drops to $2499.95, the buyer will smell blood and push for lower, until the price falls to a more natural place like $2450 or $2400. The seller knows this, and knows that the buyer knows it, and knows that the buyer knows that he knows it, and so on, and vice versa. Therefore, even if the seller’s true lowest profitable price is $2419.37 and the buyer’s true highest acceptable price is $2693.01, the actual result is most likely $2500 or no sale. (This concept was later rederived and labeled the Schelling Fence by Yvain/Scott Alexander.)
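One way to see why $2500.00 stands out in that range is a crude roundness heuristic. This is my own illustration, not anything from the book: score each candidate price (in cents) by how many trailing zeros it has, and the focal point pops out of the bargaining range.

```python
# Toy sketch (my own heuristic, not Schelling's): a price is more "focal"
# the rounder it is.  Roundness here = trailing zeros of the price in cents.

def roundness(cents: int) -> int:
    """Count trailing zeros: $2500.00 -> 250000 cents -> 4 zeros."""
    score = 0
    while cents > 0 and cents % 10 == 0:
        score += 1
        cents //= 10
    return score

lo, hi = 241937, 269301  # $2419.37 .. $2693.01, in cents
focal = max(range(lo, hi + 1), key=roundness)
print(f"${focal / 100:.2f}")  # -> $2500.00
```

Note that $2600.00 ties on this crude metric (Python’s `max` just returns the first maximal element); real focality also weighs things the heuristic ignores, like being near the middle of the range.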
This has profound implications for the stability of negotiations, arguments, and wars, and puts an exclamation point on how truly alien negotiating with even a very-slightly-different type of mind would be. For example, national borders are generally stable because they are a fence; pushing a few miles into national territory and then stopping is implausible, so a border will be defended as though its failure signals the beginning of a long, deep-striking offensive. A pair of militaries lining up on the map above would draw their battle lines at the river if they both wanted to avoid an immediate fight. The Dow Jones crossing 20,000 could be very important if the optimistic artificial milestone influences behavior. The political status quo, even if arbitrary, can be stable even if no faction is satisfied with it. And so on.
p. 163-172: experiments
somewhere: agreements depend on the shared map, not just the territory
p. 71: The Schelling Fence
p. 55, p. 60-63: Schelling Games
This is the central insight, but not the only one. The largest other discussion is about commitment mechanisms and how they affect bargaining. The key point is that weakness – decisions you can’t make, control you’ve given up, outcomes which are worse for you than they could be – can create a much stronger bargaining position. Unilaterally making your position worse, or your opponent’s position stronger, can get you better outcomes. Commitments also can very quickly render even simple games computationally intractable to analyze in full explicit form.
p. 153: commitment mechanisms and the explosion of options
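The weakness-as-strength logic can be made concrete with a toy game of Chicken (the payoff numbers below are my own illustration, not from the book): by visibly throwing away the option to swerve, the row player forces the column player’s best response to be swerving, and pockets the best outcome.

```python
# Toy sketch of commitment in the game of Chicken (payoffs are my own
# hypothetical numbers).  Each entry is (row player's payoff, column's).

payoffs = {
    ("swerve", "swerve"):     (3, 3),
    ("swerve", "straight"):   (1, 4),
    ("straight", "swerve"):   (4, 1),
    ("straight", "straight"): (0, 0),  # head-on crash: worst for both
}

def best_response(opponent_move, my_options, me_is_row):
    """Pick the option that maximizes my payoff against a known opponent move."""
    def my_payoff(mine):
        pair = (mine, opponent_move) if me_is_row else (opponent_move, mine)
        return payoffs[pair][0 if me_is_row else 1]
    return max(my_options, key=my_payoff)

# Commitment: the row player visibly discards "swerve" (tears out the
# steering wheel).  The column player, facing a certain "straight",
# can only salvage 1 by swerving -- which hands row the top payoff of 4.
col_reply = best_response("straight", ["swerve", "straight"], me_is_row=False)
print(col_reply, payoffs[("straight", col_reply)])
```

The "worse" position (one fewer option, and a guaranteed crash if the opponent doesn’t yield) is exactly what makes the threat credible.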
The last piece I found interesting is a collection of insights about the nature of brinksmanship, surprise attacks, and understanding the balance of terror in terms of much more primitive negotiation tactics.
Threatening to pull the other side off a cliff only works if you might slip: since you would fall too, threatening to deliberately jump isn’t credible, but edging ever closer to the brink, where chance might take you both over, is.
p. 199-200: theory of brinksmanship
Model the conflict as a chance of unwise attack plus the ability to attack strategically with some success probability. Intuition suggests that you should update your chance of strategic attack based on the likelihood of enemy attack; the more likely they are to launch a first strike, the more likely you should be to preempt them. This would generate a feedback loop; you believe them to be more likely, so you become more likely to attack, which they know, thus they become more likely to attack, so you must as well, and it spirals to doom. This could be even faster if your chance of unwise attack depends on how threatened you are and so it rises as well.
Instead, even under the strong assumptions, unless the baseline random chance of attack is sufficient to incite a voluntary first strike right off the bat (a calculation that depends on how likely you are to succeed at an attempted one-sided war and exactly how good and bad the continued peace and MAD options look), there is no incentive to strategically start a war. Obeying the principle of ultrafinite recursion, the infinite regress vanishes and is replaced by a simple all or nothing decision.
p. 209-216: surprise attacks, anti-Petrov errors, and ultrafinite recursion
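The collapse of the regress can be sketched as a one-line threshold test. The payoff numbers and functional form below are my own toy illustration, not Schelling’s model: strike first only if the expected value of waiting out one more round of peace-with-accident-risk drops below the value of striking now. The opponent’s reasoning about your reasoning never enters the calculation.

```python
# Toy sketch (my own numbers) of the all-or-nothing decision that replaces
# the "they think that we think..." spiral: a single threshold on the
# baseline accident probability.

def should_strike_first(p_accident, v_peace, v_first_strike, v_struck):
    """Strike iff striking now beats the expected value of waiting:
    peace with probability (1 - p_accident), absorbing an unwise
    enemy attack with probability p_accident."""
    v_wait = (1 - p_accident) * v_peace + p_accident * v_struck
    return v_first_strike > v_wait

# With these hypothetical stakes, only a large baseline accident risk
# tips the decision -- there is no feedback loop to spiral through.
for p in (0.01, 0.10, 0.60):
    print(p, should_strike_first(p, v_peace=10, v_first_strike=-5, v_struck=-20))
```

With these numbers the threshold sits at p = 0.5; below it, waiting dominates no matter how many levels of mutual suspicion you stack on top.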
“Bomb them back to the Stone Age” meets the Stone Age (well, medieval) tactic of exchanging hostages for good behavior.
p. 239: the balance of terror as hostage-taking