Counter-Counter-Counter-Neoreaction

While our modern neoreactionaries didn’t originate their line of thought, and anti-enlightenment thought is approximately five minutes younger than the enlightenment, I think that ultimately, most of the problems they see in modern society don’t come from democracy or ‘demotism’; they came about because Germany built the Autobahn.

Naturally, there are a couple steps in this chain of logic. And it’s possible I’m misunderstanding NRX thought. But if you’ll bear with me for a moment:

It’s a common claim in NRX circles that we have lost social technologies that were making our lives far better for a very long time, and that if we could restore them, we could get ‘more good stuff and less bad stuff’. This claim is generally accompanied by some variation on this graph:

[Figure: homicide rate, prison rate, and their sum; the sum rises sharply from 1960 to 1990. Originally from Steve Sailer.]

To be fair, this version of the graph raises some questions about the usual explanation, and looks more like prison-unresponsive long-cycle variation in the crime rate than a breakdown of social technology. But there are a lot of confounders, so I’m willing to basically accept the premise (somewhat for the sake of argument): we now need a very high imprisonment rate just to get almost as little crime as we used to get with much lower punishment, and the reason we got it then was better social tools for discouraging crime.

When did that break down? Well, roughly the 60s. In the wake of the pill? True, but also in the wake of the Interstate Highway System. We had very long experience iterating on social systems that worked to punish defectors and encourage good behavior on the scale of a town, and despite the changing nature of the city, on the neighborhood level they worked pretty well, too. What the critical parts were, I’m not sure. Perhaps religion, perhaps sexist social roles; the only confident claim I’ll make is that the systems of public shame and reputation were important. What’s critical here is that they broke down in the face of easy cross-country transportation. You could outrun your bad reputation – Kerouac’s On The Road is a description of this outrunning process at work. (Scott Alexander drew attention to this aspect of the story, h/t Scott.)

And the Interstate was basically inevitable once Eisenhower saw the Autobahn at work. Eisenhower may have been an unusually good wartime logistician (tangent: why isn’t there a word for that profession?), but not an extraordinary one; what he saw, someone else would have seen soon enough. Which gets to my point: the Autobahn was built for war, but by existing and being used, it ensured the mobility that would break down the assumptions underlying our effective social technology.

“AH HAH!” says my hypothetical interlocutor, “But you have tracked this back to another dangerous demotist society, the Nazis!” This is of course true. But was the Autobahn a fascist idea? Would the Prussian monarchy that united Germany under Bismarck’s hand, or much earlier encouraged the development of the post roads along with the House of Thurn und Taxis, really have turned down the massive strategic mobility it offered? No, this was a pure power move; if the military could move faster on well-paved roads, those roads were going to be built.

So who is to blame for that, then? The inventor of the tank? Henry Ford, for making the car mass-produced enough to be a mass weapon of war? The inventor of the diesel engine? In my view, it’s inseparable; the timing could have been different, but from the moment the steam engine was invented, the car, the transportation network, and the breakdown of the locality-based social system were just a matter of time.

Now, one of the traditional NRX reactions is to say that yes, this is all terrible, and we should therefore go back to monarchy. But short of destroying all the products of the Industrial Revolution, we can’t actually reverse the trends. We do need better social systems, but the old ones could at best be a short-term patch that broke down as quickly as The New Republic when it met The Festival.

Which still presents a problem: How can we recover the social benefits of robust localized reputation and societal expectations without sacrificing the economic, health, and non-bigotry benefits our development has given us or setting ourselves up in a perpetual war against the free flow of information? Here are a couple ideas:

  • Futarchy-like reputation markets. Make influencing reputation anonymous but costly; if the people you rate defy your predictions, you lose significant reputation yourself, as well as the numeric currency that serves as credibility. (Naturally this will draw some comparisons to Whuffie. A minimal sketch of the mechanism follows this list.)
  • Enclaves a la Scott Alexander’s Archipelago, where different local-scale societies lower the physical cost of movement but increase the social cost of adjusting to a new framework; presumably the Jackson’s Wholes would be outcompeted, but perhaps we’d end up living in some variant of neo-Hong Kong. (Which could be Buck Godot’s New Hong Kong or Hiro Protagonist’s Greater Hong Kong. Hong Kong is probably over-emphasized here, but hey, maybe it was actually that great.)
  • Narrow-outward approach to enclaves: Build one culture that works and try to make it good enough to be worth keeping social credit in. I call my personal attempt at this project “The Bardic Conspiracy”. Presumably if this works, others will try the same, and in the meantime those of us in our silo will have gotten some of the benefits back.
  • Federalism with social teeth: the USA has lots of subunits and many of them have real regional identities, which are not trivial to leave. Can we find a way to amplify that? Maybe split them up even smaller? (Certainly, e.g., California has several subunits with separate regional identities: SoCal, NorCal, and inland. Even New York has ‘NYC’ and ‘Upstate’.)
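
To make the first idea slightly more concrete, here is a minimal sketch of the staking mechanism such a reputation market might use. Everything in it – the class names, the stake sizes, the double penalty for being wrong – is my hypothetical illustration of the incentive structure, not a worked-out design:

# Minimal sketch of a futarchy-like reputation market: raters stake
# credibility on predictions about someone's future conduct, and
# wrong predictions burn the stake. All names and numbers are
# hypothetical.
class Rater(object):
    def __init__(self):
        self.credibility = 100.0

class Prediction(object):
    def __init__(self, rater, target, claim, stake=10.0):
        self.rater = rater
        self.target = target
        self.claim = claim    # e.g. "will repay the loan"
        self.stake = stake

    def resolve(self, came_true):
        if came_true:
            self.rater.credibility += self.stake      # reward accuracy
        else:
            self.rater.credibility -= self.stake * 2  # being wrong is costly

alice = Rater()
bet = Prediction(alice, target="bob", claim="will repay the loan")
bet.resolve(came_true=False)
print(alice.credibility)  # 80.0: bad predictions cost you

The point of the asymmetric penalty is that influencing someone’s reputation is never free; you can only damage others by putting your own credibility on the line.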

American Water Policy Is Asinine

(This post contains many puns, most totally unintended. It turns out lots of economic terms about price movement are about water. My apologies, but not enough apologies to get rid of them.)

Libertarians have a rule of thumb: “Anywhere you find a shortage, look back and you’ll find price controls causing it.” It’s almost always true; maybe actually always. What does that have to do with anything? Bear with me a moment.

If you enjoy laughing at libertarians who are self-parodies, read this article. It is a heavy dose of First World Problems blown out of proportion. And yet? He’s got a point.

In the United States, we introduce costly regulations for consumer water use that reduce it by a respectable but not large factor; a 30% cut or so, optimistically. And all consumer water use is only 2% of the total, so even the optimistic case saves about 0.6% of total water use.

In California, billboards in every city remind you “Conserve Water! It’s a Drought!”. Meanwhile, almond farms use far more water than consumers do, and alfalfa farms use tons of it to grow plants that aren’t particularly well-suited to California weather and are shipped overseas as soon as they’re harvested. If we were managing this effectively, there would not be alfalfa farms in formerly-arid parts of California.

In Flint, Michigan, residents have to buy bottled water at exorbitant expense because their pipes can’t deliver clean water to them. Meanwhile, nearby, Nestle makes a killing buying water rights and packaging that water as bottled water.

These have something in common: water is being poorly allocated. The residents of Flint, and many other American cities, value clean water far more highly than Nestle does, but Nestle can negotiate for the water rights more effectively and the populace can’t even attempt to outbid them. California urges a small fraction of its water consumption to be cut, while farms pump water into dry areas that don’t make sense to farm in, to grow crops better suited to other places.

The common thread is a fixed price of water from the pipes that doesn’t respond to supply and demand. The water the farms use is worth more than the farms are paying, but it is going to suddenly run out and then not be available at that price any longer. If the system were stable, the price would rise as projected supply (i.e. rainfall) dropped. In Flint, the managers of the water supply could only make a fixed profit from selling it through normal channels, but selling the rights to Nestle had no such barrier, so that price could float to market rates.

Of course, these restrictions on price exist for a reason, and the goal is good: everyone should have enough clean water to drink. But in Flint in 2016, and California in ~2020, this isn’t likely to happen. The problem with a price control is that it affects everyone; if the price of water is the same for the consumer drinking it and the farm watering alfalfa, you can’t subsidize one without subsidizing the other. When >90% of the water consumption is being subsidized for the sake of the other <10%, many enterprises will show up which would be wildly unprofitable if they had to pay what the water was worth but can thrive on the cheap supply. When bottled water, even in bulk, is so much more expensive than tap water, companies will turn the one into the other and net a hefty profit, and consumers whose tap fails will see their water costs skyrocket.

Consider an alternative: government subsidy of drinking water only. Provide a large tax credit for consumer water expenses (personal use only), perhaps 90% of the cost, and let prices float free. The water company can raise rates until supply is managed, and if the citizenry suddenly needs to switch to bottled water, they won’t face such a massive jump in cost.
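
To make the arithmetic of the proposal concrete, here is a toy comparison. Every price and quantity below is a hypothetical I made up for illustration; the point is only the incentive structure:

# Toy comparison of monthly water costs under the two regimes.
# All numbers are hypothetical, chosen only for illustration.
GALLONS_PER_MONTH = 3000  # a rough stand-in for a household

def cost_controlled(price_per_gallon=0.004):
    # Today: one controlled price, identical for households and alfalfa farms.
    return GALLONS_PER_MONTH * price_per_gallon

def cost_floating_with_credit(price_per_gallon=0.02, credit=0.90):
    # Proposal: the price floats to market; households get a 90% tax credit.
    return GALLONS_PER_MONTH * price_per_gallon * (1 - credit)

print("$%.2f" % cost_controlled())            # $12.00 for everyone, farm or family
print("$%.2f" % cost_floating_with_credit())  # $6.00 out of pocket for the household;
                                              # a farm using the same water pays $60.00

Even if the floating price is five times the controlled one, the household’s out-of-pocket cost falls, while the farm finally faces the real price of the water.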

A General Theory of Bigness and Badness

So, a glib take you’ve probably heard is that the problem with Big Government, Big Business, Big Etc. is not the government or the business or the etc. but the “Big”. This is extremely superficial and is essentially elevating a trivial idiosyncrasy of the English language to an important structural principle of the universe, which makes about as much sense as nominative determinism. I think it’s true anyway. Here is my theory of why:

Large groups of people are increasingly hard to coordinate. Getting a group of one person to be value-aligned with itself is literally trivial; 4 people is easy; 12 people is doable for fairly complex values; 50 gets difficult; and so on. For a very large organization, getting the whole org focused on a complex, nuanced goal is basically impossible.

So the larger an organization gets, the more its de facto goals become simplified, even if it keeps paying lip service to nuanced goals. Theoretically it should be possible to keep nuanced goals at a large scale, but it would take more and more effort per person as the organization grows, and I suspect it would reach “you must spend 110% of your time working on tasks to keep yourself value-aligned”, i.e. impossible in practice, somewhere in the 150-1000 employee range.
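
As a back-of-envelope illustration of why the crossover plausibly lands in that range, consider a toy model where staying aligned costs a fixed amount of effort per pairwise relationship. The quarter-hour-per-pair figure is purely my hypothetical, not a measured quantity:

# Toy model: each of your n-1 colleagues costs some fixed number of
# hours per week of alignment work (meetings, docs, hallway syncs).
# The per-pair figure below is hypothetical.
WORK_WEEK_HOURS = 40.0

def alignment_fraction(n, hours_per_pair=0.25):
    # Fraction of a work week spent just staying value-aligned.
    return hours_per_pair * (n - 1) / WORK_WEEK_HOURS

for n in (4, 12, 50, 161, 1000):
    print("%d: %.0f%%" % (n, 100 * alignment_fraction(n)))
# 4: 2%, 12: 7%, 50: 31%, 161: 100%, 1000: 624%

Under those made-up constants, the “more than 100% of your time” wall arrives around 160 employees, comfortably inside the 150-1000 range; different constants move the wall but not the shape of the curve.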

So that’s part one of the theory: goals get simplified and nuance disappears as the organizations get bigger. By itself this is sufficient to strongly suggest that big organizations are bad in and of themselves. But there are some corollaries which make the case stronger.

Corollary the first: Flatness of hierarchy does matter. The deeper the hierarchy, the more large sub-organizations exist within the large parent organization. The same forces that push the parent org to have a simple goal push each sub-org to have a simple goal. This can explain the rampant bureaucratic infighting in large hierarchical organizations; each sub-org is following its default goal and those come into conflict. This is approximately the Hanlon’s Razor (“Never ascribe to malice that which is adequately explained by incompetence”) analog of  The Gervais Principle.

Corollary the second: “Big” may not imply “evil” but it does forbid “good”. Only simple goals are sustainable for large orgs. But not all simple goals are equally “reproductively fit”. For for-profit companies the most reproductively fit goal is “make a profit”. For political parties it’s “get (re)elected”. For bureaucracies it’s “maintain/expand our budget”. For charities… probably it’s “keep our incoming donations high”, but I’m not confident in that. The bigger the organization, the harder it is for well-intentioned members, even well-intentioned leaders (CEO Larry Page and President Sergey Brin, President Barack Obama, Chairman of the Joint Chiefs of Staff Colin Powell, …) to keep the organization out of the “low-energy well” that is the default self-perpetuation goal. Organizations which are kept on task for an unselfish core goal will do poorly relative to their peers and tend to die out.

In summary: No manager could possibly keep a large organization on target for a complex goal, and attempting to keep a large org on target for a simple but unselfish goal will rapidly kill the organization. This applies fractally to sub-organizations and super-organizations.

Does this teach us lessons about what to do? Well, it cautions us against trusting that large organizations consisting of benevolent people will act benevolently. It makes me somewhat more skeptical of OpenAI. But nothing specific, no.

@docstrings: You have no class.

If you have written any Python code in a shared project recently, you have probably seen a documentation convention like this:

def complex(real=0.0, imag=0.0):
  """Form a complex number.

  @param real: The real part (default 0.0)
  @param imag: The imaginary part (default 0.0)

  @returns: ComplexNumber object.
  """
  if imag == 0.0 and real == 0.0: return complex_zero
  ...

This is a good and useful convention for explaining things to future users of the code, if a little verbose. However, you are more likely to have seen class-based code, and there the convention is not applied very well at all. For example:

class CompetitionBasket(FruitBasket):
  """Fruit basket that is entered into a scored competition.

  @param fruits: A dict of fruit names and quantities
  @param scores: A dict of fruit names and scores-per-fruit
  """

  def __init__(self, fruits, scores):
    self.scores = scores
    super(CompetitionBasket, self).__init__(fruits)

    ...

  def score(self, relevant_fruits=[]):
    """Return the score of the basket according to the current rules.

    @param relevant_fruits: An array of fruit names corresponding to
    the fruits which are currently under consideration. Defaults to an
    empty list and scores all fruit.

    @returns: Integer score of the basket.
    """
    ...

On first glance, the docstring for score looks like it follows the same principles. But in actuality it is missing important information, which in a larger class in a complex system would be critical. Both self.fruits and self.scores are critically necessary to the functioning of this method, but neither of them is mentioned. There are advantages to this omission-friendly approach: it is fairly easy to programmatically verify the presence of non-empty docstrings for all params and return values a function possesses, and significantly harder to verify the presence of docstrings for all non-trivial instance attributes used in a method, or all values mutated by side effects. There are significantly more judgement calls involved in assessing which values need a docstring and which don’t, and it’s plausible that raising the bar for “docstring required” to include these would result in the requirement being more commonly flouted for other methods.
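
As an aside, the easy half of that verification really is easy. Here is a minimal sketch of a checker for the simple @param convention, using only the standard library; the helper name check_param_docs is mine, and real linting tools are far more robust than this:

import inspect
import re

def check_param_docs(func):
    # Return the parameters of func that lack an @param docstring entry.
    doc = inspect.getdoc(func) or ""
    documented = set(re.findall(r"@param\s+(\w+):", doc))
    params = [p for p in inspect.signature(func).parameters if p != "self"]
    return [p for p in params if p not in documented]

def example(real=0.0, imag=0.0):
    """@param real: The real part."""

print(check_param_docs(example))  # ['imag'] is undocumented

Verifying the @instance_param version would require tracing which attributes each method actually reads, which is exactly the hard part described above.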

But to consider this and stop is an instance of Goodhart’s Law. It is an argument against mandating these entries, not an argument against including them wherever possible. For all the reasons we want docstrings (clarity of purpose, maintainability, etc.), we should include them in the docstring wherever possible. In some cases, this could result in a docstring 20 lines long, which is clearly a problem. However, in those cases I propose that the main problem is that there is one method which implicitly takes more than a dozen arguments; the object-oriented design has concealed the fact that it is an unwieldy, unmaintainable method, and forcing this docstring convention on it brings that fact back into the open.

I would suggest this docstring convention:

class Fnord(object):
    ...

    def methodName(self, foos):
        """Frobozz the foos according to the Fnord's bazzes.

        @param foos: a list containing Foo instances to frobozz
        @instance_param bazzes: Baz instances containing rules for frobozzing
            for this Fnord
        @class_param quux: Number of times Fnords frobozz each foo
        """
        ...


Against the Virtue of Ash

At the Bay Area Winter Solstice this year, one of the major themes was something Cody Wild called “The Virtue of Ash”. (Text of Solstice 2018 can be found here.) The virtue of ash is, assuming I understand it right, the quality of enduring catastrophe and seeing your life in ruins around you and rebuilding anyway.

A friend of mine, who only recently moved to the area, was in attendance. When I asked him what he thought of Solstice, one of the first things he shared was “Holy scrupulosity triggers, Batman!”

I think these two facts are related.

After discussing it with some other friends, I’ve concluded scrupulosity is not quite the right word. But I do believe the virtue of ash is harmful to promote, for the scrupulosity-like reasons that inspired my friend’s impression. And I feel fairly confident it is useless to cultivate.

First off, let’s set aside for the moment whether it’s useful to cultivate and consider whether it’s good to promote. Promoting the cultivation of the virtue of ash is exhorting people to consider what they’d do on the worst day of their lives: imagining the worst that could happen and trying to bend their minds to be someone who could handle that and keep going. It conveys a message that doing the best you can to make the future bright is not enough; you should also be preparing for much darker futures where all your current plans lie in ruins, and working to make those brighter too. This hits the anxious by raising the implicit standard for “doing enough” even higher. It also hits the depressed by making highly depressing, dark outcomes mentally available. Since the community being given this advice is already prone to anxiety, depression, and scrupulosity, this is a Bad Thing and should not be done without a clear reason to think the benefits are large. This year’s organizers may think those benefits are large, but if so, I disagree.


I Have Seen the Tops of Clouds [Adapted]

I gave this speech in 2016 at the Bay Winter Solstice, as one of the speeches of darkness. The original is by Quinn Norton; this version was revised to be shorter, to sound less like I had personally delivered all the revelations and experienced the anecdotes, and to focus less on concerns Ms. Norton has which I do not share. I was and remain proud of this edited version, and decided to make it publicly readable. If you intend to read it as a speech to an audience, I suggest using my adaptation. For any other purpose, use hers. I believe only Ms. Norton herself has the right to use the work or any derivative for any commercial purpose, so do not do so or ask me for permission to do so.

I wake up in the middle of the night sometimes. I peer around my room scared and stressed, like all the things I can contain during the day break loose in dreams I can’t remember, the echoes of all these forgotten nightmares roaming around my body. Sometimes I want to cry, or curl up, or scream. I stare into the corners of my room. I try to fall back to sleep, even though I don’t want to go back to whatever sent me here.

It’s not a coincidence that I’m told I’m depressing. I think about depressing things.
I try to face the worst things about humanity and our situation. It started with how the oceans are dying, but since then moved on to genocide, imprisonment, the history of labor exploitation, computer security and mass surveillance, racism, and technological apocalypse.
I’m fun at parties.

It may be that our ticket was punched before we ever got started. While we’re cutting our time on earth shorter, it might be that our species was never going to make it past the end of the womb of our ice-age birth.
I explained this to a friend, about how fragile an organism we are, and how the ice ages cycle. She laughed. She was used to this strange form of hope.

“You have to choose hope, or just jump out a window,” someone said, a person who’d been accused of techno-utopianism. They were walking along the California coast at sunset, talking about all the ways our technological lives could go wrong, and the many ways they are going wrong.
They weren’t utopian, it turned out. They’d thought of the worst long before their detractors had. They’d decided to try to head it off, instead of jumping out a window.

We are diseased and angry and we kill each other and ourselves and all the world. I try to look at this, and my own part in it. Sometimes it’s overwhelming. I feel so powerless trying to comprehend all the terrible things we face, much less get past them into the future with our humanity and our inconceivably beautiful little blue-green planet preserved.

Looking at the ways we break the world, think of Tolstoy’s admonition that if we cannot give up the ills of our lives, then we should declare them, face them, put them on our flags. We can tell the world about the edge of our strength, ability, and virtue. We can share the failure honestly. This is good, and this helps, but it doesn’t bring back the vanished creatures and dying earth, and it doesn’t stop the relentless human cruelty.

There are nights full of invective and hate and days I can only see the flaws in our world, and feel my own flaws and my own fear from within.
And there is so much fear.
The land will drown. The seas could turn acid and burn us from above while starving us from within. At any moment we could still be consumed by nuclear fire, an accidental holdover from the Cold War we’ve failed to wrap up, like a binge drinker or a gambling addict who gets sober, but can’t face the past, and lets it fester.

All these grown-up monsters for my grown-up mind, they are there in the nights I wake up terrified and taunted by death. When I feel so small and broken, when despair and terror take me, I have a secret tool, a talisman against the night. I don’t use it too often so that it doesn’t lose its power. I learned it on airplanes, which are strange and thrilling and full of fear and boredom and discomfort. When I am very frightened, I look out the window and say, very quietly:

I have seen the tops of clouds

And I have.

In all the history of humanity, I am one of the few that has seen the tops of clouds. Many would have died to do so, and some did.
I have seen them many times.
I have seen the Earth from space, and spun it around like a god to see what’s on the other side. We are the only consciousness we’ve ever found that has looked deep into the infinite dark, and instead of dark, we saw galaxies. Suns and worlds without number. We have looked into our world and found atoms, atomic forces, systems that dance to the glorious music of the universe.
We have seen actual wonders that verge on the ineffable. We have coined a word for the ineffable. We have coined thousands of words for the ineffable. In our pain we find a kind of magic, in our worst and meanest specimens we find the flesh of a common human story. We are red with it.

I know mysteries that great philosophers would have died for, just to have them whispered in their dying ears. I can look them up on my smartphone.
I live in the middle of miracles, conceptions and magics easily worth many lifetimes to learn, from which I can pick and choose. I have wisdom and knowledge poured around me like a river, more than I could learn in a thousand lifetimes, and I am still alive.

It is good that I am alive, it is good that we are alive. Even if we kill ourselves off with nuclear fire, or gray goo, or drown ourselves in stinking acid oceans, it is good that we have lived, that we did all of this, and that we grew into what we are, and learned to dream of what we could be.
Perhaps we will soon die, but we will die having gone so far above our primordial ponds and primate forests that we saw the tops of clouds.

It is good that in the body of this weak and tender African animal a piece of the universe has gazed upon itself, that this tiny appendage of existence looked on everything its eyes and tools could drink in and experienced the most pure of wonder, the most terrible of awe. It is worth it, all of it, to even for a moment be the universe gazing upon itself. We reached so far above our biological fate that we spoke love to life, all life, and to its dark universal womb.

That takes away the fear for me. Not all of it, but enough so that I can hug my partner and fall asleep, to dream dreams of what we’ll do next, of how we’ll live this hope.

I can get past the horrible things we face. I can acknowledge the boring and unpleasant truths along the way. I can take up Tolstoy’s charge, and dream of a healing world where my descendants and their descendants will see wonders that I cannot now conceive.

We have seen the tops of clouds.

Social Modeling Recursion (Excerpt)

This is quoted from an explanation by Lahwran (blog), part of a larger post on LessWrong, and sourced from an original claim and example by Andrew Critch. To my knowledge, Critch has never posted it online. I found myself wanting to reference this divorced from the remainder of the post, so I have reproduced it here. None of these words are my own. (If in the distant future this is preserved while the originals are not, then my apologies; I feel the same way about several ancient Greek philosophers, but at least I’ve made clear that I haven’t edited it.)

A RECURSION EXAMPLE

I found the claim that humans regularly social-model 5+ levels deep hard to believe at first, but Critch had an example to back it up, which I attempt to recreate here.

Fair warning, it’s a somewhat complicated example to follow, unless you imagine yourself actually there. I only share it for the purpose of arguing that this sort of thing actually can happen; if you can’t follow it, then it’s possible the point stands without it. I had to invent notation in order to make sure I got the example right, and I’m still not sure I did.

(I’m sorry this is sort of contrived. Making these examples fully natural is really really hard.)

  • You’re back in your teens, and friends with Kris and Gary. You hang out frequently and have a lot of goofy inside jokes and banter.
  • Tonight, Gary’s mom has invited you and Kris over for dinner.
  • You get to Gary’s house several hours early, but he’s still working on homework. You go upstairs and borrow his bed for a nap.
  • Later, you’re awoken by the activity as Kris arrives, and Gary’s mom shouts a greeting from the other room: “Hey, Kris! Your hair smells bad.” Kris responds with “Yours as well.” This goes back and forth, with Gary, Kris, and Gary’s mom fluidly exchanging insults as they chat. You’re surprised – you didn’t know Kris knew Gary’s mom.
  • Later, you go downstairs to say hi. Gary’s mom says “welcome to the land of the living!” and invites you all to sit and eat.
  • Partway through eating, Kris says “Gary, you look like a slob.”
  • You feel embarrassed in front of Gary’s mom, and say “Kris, don’t be an ass.”
  • You knew they had been bantering happily earlier. If you hadn’t had an audience, you’d have just chuckled and joined in. What happened here?

If you’d like, pause for a moment and see if you can figure it out.


You, Gary, and Kris all feel comfortable bantering around each other. Clearly, Gary and Kris feel comfortable around Gary’s mom, as well. But the reason you were uncomfortable is that you know Gary’s mom thought you were asleep when Kris got there, and you hadn’t known they were cool before, so as far as Gary’s mom knows, you think she thinks Kris is just being an ass. So you respond to that.

Let me try saying that again. Here’s some notation for describing it:

  • X => Y: X correctly believes Y
  • X ~> Y: X incorrectly believes Y
  • X ?? Y: X does not know Y
  • X=Y=Z=...: X and Y and Z and … are comfortable bantering

And here’s an explanation in that notation:

  • Kris=You=Gary: Kris, You, and Gary are comfortable bantering.
  • Gary=Kris=Gary's mom: Gary, Kris, and Gary’s mom are comfortable bantering.
  • You => [Gary=Gary's mom=Kris]: You know they’re comfortable bantering.
  • Gary's mom ~> [You ?? [Gary=Gary's mom=Kris]]: Gary’s mom doesn’t know you know.
  • You => [Gary's mom ~> [You ?? [Gary=Gary's mom=Kris]]]: You know Gary’s mom doesn’t know you know they’re comfortable bantering.

And to you in the moment, this crazy recursion just feels like a bit of anxiety, fuzziness, and an urge to call Kris out so Gary’s mom doesn’t think you’re ok with Kris being rude.

Now, this is a somewhat unusual example. It has to be set up just right in order to get such a deep recursion. The main character’s reaction is sort of unhealthy/fake – better would have been to clarify that you overheard them bantering earlier. As far as I can tell, the primary case where things get this hairy is when there’s uncertainty. But it does actually get this deep – this is a situation pretty similar to ones I’ve found myself in before.

There’s a key thing here: when things like this happen, you react nearly immediately. You don’t need to sit and ponder, you just immediately feel embarrassed for Kris, and react right away. Even though in order to figure out explicitly what you were worried about, you would have had to think about it four levels deep.

If you ask people about this, and it takes deep recursion to figure out what’s going on, I expect you will generally get confused non-answers, such as “I just had a feeling”. I also expect that when people give confused non-answers, it is almost always because of weird recursion things happening.

In Critch’s original lightning talk, he gave this as an argument that the human social skills module is the one that just automatically gets this right. I agree with that, but I want to add: I think that that module is the same one that evaluates people for trust and tracks their needs and generally deals with imagining other people.

Exported: Blind Goaltenders: Unproductive Disagreements

(Exported: Copying posts out from Lesserwrong, since I have totally lost confidence in it.)

If you’re worried about an oncoming problem and discussing it with others to plan, your ideal interlocutor, generally, is someone who agrees with you about the danger. More often, though, you’ll be discussing it with people who disagree, at least in part.

The question that inspired this post was “Why are some forms of disagreement so much more frustrating than others?” Why do some disagreements feel like talking to a brick wall, while others are far more productive?

My answer is that some interlocutors are ‘blind goaltenders’. They not only disagree about the importance of your problem, they don’t seem to understand what it is you’re worried about. For example, take AI Safety. I believe that it’s a serious problem, most likely the Most Important Problem, and likely to be catastrophic. I can argue about it with someone who’s read a fair chunk of LessWrong or Bostrom, and they may disagree, but they will understand. Their disagreement will probably have gears. This argument may not be productive, but it won’t be frustrating.

Or I could talk to someone who doesn’t understand the complexity of value thesis or orthogonality thesis. Their position may have plenty of nuances, but they are missing a key concept about our disagreement. This argument may be just as civil – or, given my friends in the rationalsphere, more civil – but it will be much more frustrating, because they are a blind goaltender with respect to AI safety. If I’m trying to convince them, for example, not to support an effort to create an AI via a massive RL model trained on a whole datacenter, they may take into account specific criticisms, but will not be blocking the thing I care about. They can’t see the problem I’m worried about, and so they’ll be about as effective in forestalling it as a blind goalie.

Things this does not mean

  • Blind goaltenders are not always wrong. Lifelong atheists are often blind goaltenders with respect to questions of sin, faith, or other religiously-motivated behavior.
  • Blind goaltenders are not impossible to educate. Most people who understand your pet issue now were blind about it in the past, including you.
  • Blind goaltenders are not stupid. Much of the problem in AI safety is that there are a great deal of smart people working in ML who are nonetheless blind goaltenders.
  • Goaltenders who cease to be blind will not always agree with you.

Things this does mean

Part of why AI safety is such a messy fight is that, given the massive impact if the premises are true, it’s rare to understand the premises, see all the metaphorical soccer balls flying at you, and still disagree. Or at least, that’s how it seems from the perspective of someone who believes that AI safety is critical. (Certainly most people who disagree are missing critical premises.) This makes it very tempting to characterize people who are well-informed but disagree, such as non-AI EAs, as being blind to some aspect. (Tangentially, a shout-out to Paul Christiano, who I have strong disagreements with in this area but who definitely sees the problems.)

This idea can reconcile two contrasting narratives of the LessWrong community. The first is that it’s founded on one guy’s ideas and everyone believes his weird ideas. The second is that anyone you ask has a long list of their points of disagreement with Eliezer. I would replace them with the idea that LessWrong established a community which understood and could see some core premises; that AI is hard, that the world is mad, that nihil supernum. People in our community disagree, or draw different conclusions, but they understand enough of the implications of those premises to share a foundation.

This relates strongly to the Intellectual Turing Test, and its differences from steelmanning. Someone who can pass the ITT for your position has demonstrated that they understand your position and why you hold it, and therefore is not blind to your premises. Someone who is a blind goaltender can do their best to steelman you, even with honest intentions, but they will not succeed at interpreting you charitably. The ITT is both a diagnostic for blindness and an attempt to cure it; steelmanning is merely a lossier diagnostic.

Exported: Personal Model of Social Energy

(Exported: Copying posts out from Lesserwrong, since I have totally lost confidence in it.)

Epistemic Status: This is a model I have derived from my own experience, with a fair amount of very noisy data to back it up. It may not generalize to anyone else. However, it seems like a framework that might be useful, so I’m sharing it here.

The excessively simple model of social energy is the introvert/extravert dichotomy. Introverts lose energy from social situations, extraverts gain energy. This is then elaborated into the I/E scale, where the size and sign of your energy change in social situations is mapped to an integer. This is clearly more descriptive of reality, but as many have pointed out, still imperfect.

I find that for me there are separate sets of factors that determine energy gain and energy loss.

For energy gain, it is a positive-slope, negative-curvature function of the number of people present. There is energy in a room, and I do pick up some of it. (Something like sqrt(n), or possibly 10 - 10/n.)

For energy loss, it is a function of how much I trust the person in the room I trust least: f(min(trust(p) for p in room)). This loss grows much faster than the gain does with the number of people present. Trust also seems to be a function of my pre-existing mood (that part I expect won’t generalize).

Naively, I would have expected this to be a weighted average of my trust of people in the room, where five people I trust very much and one I trust very little would feel very different from five I trust somewhat and one I trust very little. I have difficulty arranging that test, but preliminary data suggests that expectation was wrong; one person who I cannot relax around spoils things thoroughly. (‘Trust’, here, is very much a System 1 thing; feeling safe/open around someone, rather than feeling/thinking that they are trustworthy/upstanding/honest.)
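
Putting the pieces together, here is a minimal sketch of the whole model; the constants (the 5.0, the squaring, the sqrt) are arbitrary stand-ins for my personal curves, not fitted values:

import math

def social_energy(trust_levels):
    # Net energy change from a gathering under this model.
    # trust_levels: System-1 trust in each person present, 0.0 to 1.0.
    # Gain grows sublinearly with headcount; loss depends only on the
    # least-trusted person present. All constants are made up.
    gain = math.sqrt(len(trust_levels))
    loss = 5.0 * (1.0 - min(trust_levels)) ** 2
    return gain - loss

print(social_energy([0.9] * 6))          # small, trusted party: positive
print(social_energy([0.9] * 5 + [0.2]))  # one untrusted guest: negative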

The prediction made by this model is that you should choose your social gatherings carefully, even if you’re extroverted, as the benefits of size can be wiped out by one or two poorly-chosen guests.

More broadly, I think that considering gain and loss separately will clarify the feelings toward socializing of many self-identified introverts. Since it seems quite plausible that ‘true introverts’ who never get energy from social interaction are not actually a thing, I expect this would help improve the day-to-day lives of many.