@docstrings: You have no class.

If you have written any Python code in a shared project recently, you have probably seen a documentation convention like this:

def complex(real=0.0, imag=0.0):
  """Form a complex number.

  @param real: The real part (default 0.0)
  @param imag: The imaginary part (default 0.0)

  @returns: ComplexNumber object.
  """
  if imag == 0.0 and real == 0.0:
    return complex_zero

This is a good and useful convention for explaining things to future users of the code, if a little verbose. However, you are more likely to have seen class-based code, and there it is not used very well at all. For example:

class CompetitionBasket(FruitBasket):
  """Fruit basket that is entered into a scored competition.

  @param fruits: A dict of fruit names and quantities
  @param scores: A dict of fruit names and scores-per-fruit
  """

  def __init__(self, fruits, scores):
    self.scores = scores
    super(CompetitionBasket, self).__init__(fruits)

  def score(self, relevant_fruits=()):
    """Return the score of the basket according to the current rules.

    @param relevant_fruits: A sequence of fruit names corresponding to
    the fruits which are currently under consideration. Defaults to
    empty and scores all fruit.

    @returns: Integer score of the basket.
    """

At first glance, the docstring for score seems to follow the same principles. But it is actually missing important information, which in a larger class in a complex system would be critical: both self.fruits and self.scores are essential to the functioning of this method, yet neither is mentioned.

To be fair, the narrow approach has advantages: it is fairly easy to programmatically verify the presence of non-empty docstrings for every param and return value a function possesses, and significantly harder to verify the presence of docstrings for every non-trivial instance attribute used in a method, or every value mutated by side effects. There are significantly more judgement calls involved in assessing which values need a docstring and which don’t, and it’s plausible that raising the bar for “docstring required” to include these would result in the requirement being more commonly flouted for other methods.
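The easier half of that verification really is straightforward to automate. As a rough illustration (my own sketch, not any particular linter), a checker for the @param convention might compare a function’s signature against its docstring; here it flags the missing entry in a renamed version of the earlier example:

```python
import inspect
import re

def check_param_docs(func):
    """Return the parameter names of `func` that lack an `@param`
    entry in its docstring. Illustrative sketch only; a real linter
    would also check @returns and handle *args/**kwargs."""
    doc = inspect.getdoc(func) or ""
    # Names documented with "@param name:" in the docstring.
    documented = set(re.findall(r"@param\s+(\w+)\s*:", doc))
    # Actual parameter names, ignoring self.
    params = [name for name in inspect.signature(func).parameters
              if name != "self"]
    return [name for name in params if name not in documented]

def make_complex(real=0.0, imag=0.0):
    """Form a complex number.

    @param real: The real part (default 0.0)
    """
    return complex(real, imag)

print(check_param_docs(make_complex))  # ['imag']
```

This is exactly why the convention is easy to mandate: the check is a regex and a signature walk, with no judgement calls.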

But to consider this and stop is an instance of Goodhart’s Law. It is an argument against mandating these docstrings, not against including them wherever possible. For all the reasons we want docstrings (clarity of purpose, maintainability, etc.), we should, wherever possible, document these implicit inputs too. In some cases this could produce a docstring twenty lines long, which is clearly a problem. But in those cases I propose that the real problem is a method which implicitly takes more than a dozen arguments; the object-oriented design has concealed the fact that it is an unwieldy, unmaintainable method, and forcing this docstring convention on it brings that fact back into the open.

I would suggest this docstring convention:

class Fnord(object):
    def methodName(self, foos):
        """Frobozz the foos according to the Fnord's bazzes.

        @param foos: a list containing Foo instances to frobozz
        @instance_param bazzes: Baz instances containing rules for
            frobozzing for this Fnord
        @class_param quux: Number of times Fnords frobozz each foo
        """
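The harder half is also automatable in principle, even if the judgement calls remain. Here is a hedged sketch (my own illustration, not an existing tool) that flags self attributes a method reads without a corresponding @instance_param entry; the score method it inspects is a hypothetical stand-in for the earlier example:

```python
import ast
import re
import textwrap

def undocumented_instance_params(method_source):
    """Given the source of a method, return the self.<attr> names that
    are used in the body but not declared with @instance_param in the
    docstring. Sketch only; assumes the convention proposed above."""
    func = ast.parse(textwrap.dedent(method_source)).body[0]
    doc = ast.get_docstring(func) or ""
    documented = set(re.findall(r"@instance_param\s+(\w+)\s*:", doc))
    # Collect every attribute access on `self` anywhere in the body.
    used = {
        node.attr
        for node in ast.walk(func)
        if isinstance(node, ast.Attribute)
        and isinstance(node.value, ast.Name)
        and node.value.id == "self"
    }
    return sorted(used - documented)

src = '''
def score(self, relevant_fruits=()):
    """Return the score of the basket.

    @param relevant_fruits: fruit names under consideration
    @instance_param scores: dict of fruit names to per-fruit scores
    """
    return sum(self.scores[f] * n for f, n in self.fruits.items())
'''

print(undocumented_instance_params(src))  # ['fruits']
```

Note what this checker cannot do: it treats every self attribute as equally docstring-worthy, which is precisely the judgement call that makes mandating the convention hard.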


Against the Virtue of Ash

At the Bay Area Winter Solstice this year, one of the major themes was something Cody Wild called “The Virtue of Ash”. (Text of Solstice 2018 can be found here.) The virtue of ash is, assuming I understand it right, the quality of enduring catastrophe and seeing your life in ruins around you and rebuilding anyway.

A friend of mine, who only recently moved to the area, was in attendance. When I asked him what he thought of Solstice, one of the first things he shared was “Holy scrupulosity triggers, Batman!”

I think these two facts are related.

On discussing it with some other friends, scrupulosity is not quite the right word. But I do believe the virtue of ash is harmful to promote, for the scrupulosity-like reasons that inspired my friend’s impression. And I feel fairly confident it is useless to cultivate.

First off, let’s set aside for the moment whether it’s useful to cultivate, and consider whether it’s good to promote. Promoting the cultivation of the virtue of ash means exhorting people to consider what they’d do on the worst day of their lives: imagining the worst that could happen and trying to bend their minds to be someone who could handle that and keep going. It conveys the message that doing your best to make the future bright is not enough; you should also be preparing for much darker futures where all your current plans lie in ruins, and working to make those brighter. This hits the anxious by raising the implicit standard for “doing enough” even higher. It also hits the depressed by explicitly making highly depressing, dark outcomes mentally available. Since the community being given this advice is already prone to anxiety, depression, and scrupulosity, this is a Bad Thing and should not be done without a clear reason to think the benefits are large. This year’s organizers may think those benefits are large, but if so, I disagree.

Continue reading

I Have Seen the Tops of Clouds [Adapted]

I gave this speech in 2016 at the Bay Winter Solstice, as one of the speeches of darkness. The original is by Quinn Norton; this was revised to be shorter, to sound less like I had personally delivered all the revelations and experienced the anecdotes, and to focus less on concerns Ms. Norton has which I do not share. I was and remain proud of this edited version, and decided to make it publicly readable. If you intend to read it as a speech to an audience, I suggest using my adaptation. For any other purpose, use hers. I believe only Ms. Norton herself has the right to use the work or any derivative for any commercial purpose, so do not do so or ask me for permission to do so.

I wake up in the middle of the night sometimes. I peer around my room scared and stressed, like all the things I can contain during the day break loose in dreams I can’t remember, the echoes of all these forgotten nightmares roaming around my body. Sometimes I want to cry, or curl up, or scream. I stare into the corners of my room. I try to fall back to sleep, even though I don’t want to go back to whatever sent me here.

It’s not a coincidence that I’m told I’m depressing. I think about depressing things.
I try to face the worst things about humanity and our situation. It started with how the oceans are dying, but since then moved on to genocide, imprisonment, the history of labor exploitation, computer security and mass surveillance, racism, and technological apocalypse.
I’m fun at parties.

It may be that our ticket was punched before we ever got started. While we’re cutting our time on earth shorter, it might be that our species was never going to make it past the end of the womb of our ice-age birth.
I explained this to a friend, about how fragile an organism we are, and how the ice ages cycle. She laughed. She was used to this strange form of hope.

“You have to choose hope, or just jump out a window,” someone said, a person who’d been accused of techno-utopianism. They were walking along the California coast at sunset, talking about all the ways our technological lives could go wrong, and the many ways they are going wrong.
They weren’t utopian, it turned out. They’d thought of the worst long before their detractors had. They’d decided to try to head it off, instead of jumping out a window.

We are diseased and angry and we kill each other and ourselves and all the world. I try to look at this, and my own part in it. Sometimes it’s overwhelming. I feel so powerless trying to comprehend all the terrible things we face, much less get past them into the future with our humanity and our inconceivably beautiful little blue-green planet preserved.

Looking at the ways we break the world, think of Tolstoy’s admonition that if we cannot give up the ills of our lives, then we should declare them, face them, put them on our flags. We can tell the world about the edge of our strength, ability, and virtue. We can share the failure honestly. This is good, and this helps, but it doesn’t bring back the vanished creatures and dying earth, and it doesn’t stop the relentless human cruelty.

There are nights full of invective and hate and days I can only see the flaws in our world, and feel my own flaws and my own fear from within.
And there is so much fear.
The land will drown. The seas could turn acid and burn us from above while starving us from within. At any moment we could still be consumed by nuclear fire, an accidental holdover from the Cold War we’ve failed to wrap up, like a binge drinker or a gambling addict who gets sober, but can’t face the past, and lets it fester.

All these grown-up monsters for my grown-up mind, they are there in the nights I wake up terrified and taunted by death. When I feel so small and broken, when despair and terror take me, I have a secret tool, a talisman against the night. I don’t use it too often so that it doesn’t lose its power. I learned it on airplanes, which are strange and thrilling and full of fear and boredom and discomfort. When I am very frightened, I look out the window and say, very quietly:

I have seen the tops of clouds

And I have.

In all the history of humanity, I am one of the few that has seen the tops of clouds. Many would have died to do so, and some did.
I have seen them many times.
I have seen the Earth from space, and spun it around like a god to see what’s on the other side. We are the only consciousness we’ve ever found that has looked deep into the infinite dark, and instead of dark, we saw galaxies. Suns and worlds without number. We have looked into our world and found atoms, atomic forces, systems that dance to the glorious music of the universe.
We have seen actual wonders that verge on the ineffable. We have coined a word for the ineffable. We have coined thousands of words for the ineffable. In our pain we find a kind of magic, in our worst and meanest specimens we find the flesh of a common human story. We are red with it.

I know mysteries that great philosophers would have died for, just to have them whispered in their dying ears. I can look them up on my smartphone.
I live in the middle of miracles, conceptions and magics easily worth many lifetimes to learn, from which I can pick and choose. I have wisdom and knowledge poured around me like a river, more than I could learn in a thousand lifetimes, and I am still alive.

It is good that I am alive, it is good that we are alive. Even if we kill ourselves off with nuclear fire, or gray goo, or drown ourselves in stinking acid oceans, it is good that we have lived, that we did all of this, and that we grew into what we are, and learned to dream of what we could be.
Perhaps we will soon die, but we will die having gone so far above our primordial ponds and primate forests that we saw the tops of clouds.

It is good that in the body of this weak and tender African animal a piece of the universe has gazed upon itself, that this tiny appendage of existence looked on everything its eyes and tools could drink in and experienced the most pure of wonder, the most terrible of awe. It is worth it, all of it, to even for a moment be the universe gazing upon itself. We reached so far above our biological fate that we spoke love to life, all life, and to its dark universal womb.

That takes away the fear for me. Not all of it, but enough so that I can hug my partner and fall asleep, to dream dreams of what we’ll do next, of how we’ll live this hope.

I can get past the horrible things we face. I can acknowledge the boring and unpleasant truths along the way. I can take up Tolstoy’s charge, and dream of a healing world where my descendants and their descendants will see wonders that I cannot now conceive.

We have seen the tops of clouds.

Social Modeling Recursion (Excerpt)

This is quoted from an explanation by Lahwran (blog), part of a larger post on LessWrong, and sourced from an original claim and example by Andrew Critch. To my knowledge, Critch has never posted it online. I found myself wanting to reference this divorced from the remainder of the post, so I have reproduced it here. None of these words are my own. (If in the distant future this is preserved while the originals are not, then my apologies, I feel the same way about several ancient Greek philosophers, but at least I’ve cleared up that I haven’t edited it.)


I found the claim that humans regularly social-model 5+ levels deep hard to believe at first, but Critch had an example to back it up, which I attempt to recreate here.

Fair warning, it’s a somewhat complicated example to follow, unless you imagine yourself actually there. I only share it for the purpose of arguing that this sort of thing actually can happen; if you can’t follow it, then it’s possible the point stands without it. I had to invent notation in order to make sure I got the example right, and I’m still not sure I did.

(I’m sorry this is sort of contrived. Making these examples fully natural is really really hard.)

  • You’re back in your teens, and friends with Kris and Gary. You hang out frequently and have a lot of goofy inside jokes and banter.
  • Tonight, Gary’s mom has invited you and Kris over for dinner.
  • You get to Gary’s house several hours early, but he’s still working on homework. You go upstairs and borrow his bed for a nap.
  • Later, you’re awoken by the activity as Kris arrives, and Gary’s mom shouts a greeting from the other room: “Hey, Kris! Your hair smells bad.” Kris responds with “Yours as well.” This goes back and forth, with Gary, Kris, and Gary’s mom fluidly exchanging insults as they chat. You’re surprised – you didn’t know Kris knew Gary’s mom.
  • Later, you go downstairs to say hi. Gary’s mom says “welcome to the land of the living!” and invites you all to sit and eat.
  • Partway through eating, Kris says “Gary, you look like a slob.”
  • You feel embarrassed in front of Gary’s mom, and say “Kris, don’t be an ass.”
  • You knew they had been bantering happily earlier. If you hadn’t had an audience, you’d have just chuckled and joined in. What happened here?

If you’d like, pause for a moment and see if you can figure it out.

You, Gary, and Kris all feel comfortable bantering around each other. Clearly, Gary and Kris feel comfortable around Gary’s mom, as well. But the reason you were uncomfortable is that you know Gary’s mom thought you were asleep when Kris got there, and you hadn’t known they were cool before, so as far as Gary’s mom knows, you think she thinks Kris is just being an ass. So you respond to that.

Let me try saying that again. Here’s some notation for describing it:

  • X => Y: X correctly believes Y
  • X ~> Y: X incorrectly believes Y
  • X ?? Y: X does not know Y
  • X=Y=Z=...: X and Y and Z and … are comfortable bantering

And here’s an explanation in that notation:

  • Kris=You=Gary: Kris, You, and Gary are comfortable bantering.
  • Gary=Kris=Gary's mom: Gary, Kris, and Gary’s mom are comfortable bantering.
  • You => [gary=Gary's mom=kris]: You know they’re comfortable bantering.
  • Gary's mom ~> [You ?? [gary=Gary's mom=kris]]: Gary’s mom doesn’t know you know.
  • You => [Gary's mom ~> [You ?? [gary=Gary's mom=kris]]]: You know Gary’s mom doesn’t know you know they’re comfortable bantering.

And to you in the moment, this crazy recursion just feels like a bit of anxiety, fuzzyness, and an urge to call Kris out so Gary’s mom doesn’t think you’re ok with Kris being rude.

Now, this is a somewhat unusual example. It has to be set up just right in order to get such a deep recursion. The main character’s reaction is sort of unhealthy/fake – better would have been to clarify that you overheard them bantering earlier. As far as I can tell, the primary case where things get this hairy is when there’s uncertainty. But it does actually get this deep – this is a situation pretty similar to ones I’ve found myself in before.

There’s a key thing here: when things like this happen, you react nearly immediately. You don’t need to sit and ponder, you just immediately feel embarrassed for Kris, and react right away. Even though in order to figure out explicitly what you were worried about, you would have had to think about it four levels deep.

If you ask people about this, and it takes deep recursion to figure out what’s going on, I expect you will generally get confused non-answers, such as “I just had a feeling”. I also expect that when people give confused non-answers, it is almost always because of weird recursion things happening.

In Critch’s original lightning talk, he gave this as an argument that the human social skills module is the one that just automatically gets this right. I agree with that, but I want to add: I think that that module is the same one that evaluates people for trust and tracks their needs and generally deals with imagining other people.

Exported: Blind Goaltenders: Unproductive Disagreements

(Exported: Copying posts out from Lesserwrong, since I have totally lost confidence in it.)

If you’re worried about an oncoming problem and discussing it with others to plan, your ideal interlocutor, generally, is someone who agrees with you about the danger. More often, though, you’ll be discussing it with people who disagree, at least in part.

The question that inspired this post was “Why are some forms of disagreement so much more frustrating than others?” Why do some disagreements feel like talking to a brick wall, while others are far more productive?

My answer is that some interlocutors are ‘blind goaltenders’. They not only disagree about the importance of your problem, they don’t seem to understand what it is you’re worried about. For example, take AI Safety. I believe that it’s a serious problem, most likely the Most Important Problem, and likely to be catastrophic. I can argue about it with someone who’s read a fair chunk of LessWrong or Bostrom, and they may disagree, but they will understand. Their disagreement will probably have gears. This argument may not be productive, but it won’t be frustrating.

Or I could talk to someone who doesn’t understand the complexity of value thesis or the orthogonality thesis. Their position may have plenty of nuances, but they are missing a key concept about our disagreement. This argument may be just as civil – or, given my friends in the rationalsphere, more civil – but it will be much more frustrating, because they are a blind goaltender with respect to AI safety. If I’m trying to convince them, for example, not to support an effort to create an AI via a massive RL model trained on a whole datacenter, they may take into account specific criticisms, but will not be blocking the thing I care about. They can’t see the problem I’m worried about, and so they’ll be about as effective in forestalling it as a blind goalie.

Things this does not mean

  • Blind goaltenders are not always wrong. Lifelong atheists are often blind goaltenders with respect to questions of sin, faith, or other religiously-motivated behavior.
  • Blind goaltenders are not impossible to educate. Most people who understand your pet issue now were blind about it in the past, including you.
  • Blind goaltenders are not stupid. Much of the problem in AI safety is that there are a great deal of smart people working in ML who are nonetheless blind goaltenders.
  • Goaltenders who cease to be blind will not always agree with you.

Things this does mean

Part of why AI safety is such a messy fight is that, given the massive impact if the premises are true, it’s rare to understand the premises, see all the metaphorical soccer balls flying at you, and still disagree. Or at least, that’s how it seems from the perspective of someone who believes that AI safety is critical. (Certainly most people who disagree are missing critical premises.) This makes it very tempting to characterize people who are well-informed but disagree, such as non-AI EAs, as being blind to some aspect. (Tangentially, a shout-out to Paul Christiano, who I have strong disagreements with in this area but who definitely sees the problems.)

This idea can reconcile two contrasting narratives of the LessWrong community. The first is that it’s founded on one guy’s ideas and everyone believes his weird ideas. The second is that anyone you ask has a long list of their points of disagreement with Eliezer. I would replace them with the idea that LessWrong established a community which understood and could see some core premises; that AI is hard, that the world is mad, that nihil supernum. People in our community disagree, or draw different conclusions, but they understand enough of the implications of those premises to share a foundation.

This relates strongly to the intellectual Turing test, and its differences from steelmanning. Someone who can pass the ITT for your position has demonstrated that they understand your position and why you hold it, and therefore are not blind to your premises. Someone who is a blind goaltender can do their best to steelman you, even with honest intentions, but they will not succeed at interpreting you charitably. The ITT is both a diagnostic for blindness and an attempt to cure it; steelmanning is merely a more lossy diagnostic.

Exported: Personal Model of Social Energy

(Exported: Copying posts out from Lesserwrong, since I have totally lost confidence in it.)

Epistemic Status: This is a model I have derived from my own experience, with a fair amount of very noisy data to back it up. It may not generalize to anyone else. However, it seems like a framework that might be useful, so I’m sharing it here.

The excessively simple model of social energy is the introvert/extravert dichotomy: introverts lose energy from social situations, extraverts gain it. This is then elaborated into the I/E scale, which replaces that simple sign with a position on an integer scale. This is clearly more descriptive of reality but, as many have pointed out, still imperfect.

I find that for me there are separate sets of factors that determine energy gain and energy loss.

For energy gain, it is a positive-slope, negative-curvature function of the number of people present. There is energy in a room, and I do pick up some of it. (Something like sqrt(n), or possibly 10 − 10/n.)

For energy loss, it is a function of how much I trust the person in the room I trust least; f(min(trust(p) for p in room)). This grows much faster than the number of people present. Trust also seems to be a function of my pre-existing mood (that part I expect won’t generalize).

Naively, I would have expected this to be a weighted average of my trust of people in the room, where five people I trust very much and one I trust very little would feel very different from five I trust somewhat and one I trust very little. I have difficulty arranging that test, but preliminary data suggests that expectation was wrong; one person who I cannot relax around spoils things thoroughly. (‘Trust’, here, is very much a System 1 thing; feeling safe/open around someone, rather than feeling/thinking that they are trustworthy/upstanding/honest.)

The prediction this model makes is that you should choose your social gatherings carefully, even if you are extroverted, since the benefits of size can be wiped out by one or two poorly-chosen guests.
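To make that prediction concrete, here is a toy rendering of the model. The sqrt gain, the weight of 4.0 on the loss term, and the 0-to-1 trust scale are all my own placeholder assumptions, not claims from the post:

```python
import math

def social_energy(trusts):
    """Net energy change from a gathering, given each guest's felt
    trust level in [0, 1]. Placeholder functional forms throughout."""
    gain = math.sqrt(len(trusts))      # energy in the room, diminishing returns
    loss = 4.0 * (1.0 - min(trusts))   # set entirely by the least-trusted guest
    return gain - loss

comfortable = [0.9] * 5              # five people I trust a lot
with_stranger = comfortable + [0.1]  # plus one person I can't relax around

print(social_energy(comfortable))    # positive: a restorative gathering
print(social_energy(with_stranger))  # negative: one guest flips the sign
```

Because the loss term depends only on the minimum, adding a sixth guest raises the gain slightly but can swamp it entirely, which is the spoiled-gathering effect described above.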

More broadly, I think that considering gain and loss separately will clarify the feelings toward socializing of many self-identified introverts. Since it seems quite plausible that ‘true introverts’ who never gain energy from social interaction are not actually a thing, I expect this would help improve the day-to-day lives of many.