How I Use Beeminder

I am bad at using productivity systems. I know this because I’ve tried a bunch of them, and they almost all last somewhere between a week and four months before I drop them entirely. I’ve tried Habitica, Complice, a simple daily “what can I do tomorrow” in Google Keep, a written journal… All of them work for a little while, but only that.

Beeminder has stuck. I now have several intermittent goals set up in it that I’ve been regularly accomplishing. This is how I use it.


Beeminder is a goal-tracking app. You set a target (“at least X <entries> per week”, or special settings for weight loss or gain; it’s more flexible if you pay for a subscription) and a starting fine (by default $5). Then you enter data; it tracks your overall progress, and if you slip below the rate you set, it bills you the fine and then raises it.
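
To make the mechanic concrete, here is a toy sketch of my own of a goal with a fine that escalates when you slip. It is not Beeminder’s actual code, data model, or fee schedule; the doubling rule in particular is just a stand-in.

```python
# Toy model of the mechanic described above; not Beeminder's real code or fee schedule.
class Goal:
    def __init__(self, name, per_week, fine=5.0):
        self.name = name
        self.per_week = per_week      # required datapoints per week
        self.fine = fine              # charged when you slip (default $5)
        self.entries = []             # datapoints logged this week
        self.total_charged = 0.0

    def log(self, amount=1.0):
        self.entries.append(amount)

    def end_of_week(self):
        """If the week's total fell below the target, charge the fine and raise it."""
        if sum(self.entries) < self.per_week:
            self.total_charged += self.fine
            self.fine *= 2            # illustrative escalation; the real schedule may differ
        self.entries = []

teeth = Goal("brush teeth", per_week=14)   # twice a day
for _ in range(10):                        # only logged 10 of 14 this week
    teeth.log()
teeth.end_of_week()
print(teeth.total_charged, teeth.fine)     # 5.0 10.0 -- charged once, stakes raised
```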

When I started it, I was in a slump, and used it for two things: getting out job applications and remembering to take care of basic hygiene. It sends reminders at an increasing rate if you forget, so it helped a lot with remembering to take showers before it got late enough that I’d wake up the household doing it, and to brush my teeth regularly. And since I was contractually obligated to try hard to find a job, having finished App Academy not long before, a regular reminder that also helped me track when and where I’d applied was very useful. These were all very frequent goals; my minimums were two applications per day, brushing my teeth twice a day, and showering at least 3x/week. This was pretty good at keeping me on track, but it never took much less willpower than it did at first.

Currently, I use it somewhat differently. I still have the brushing-my-teeth goal, but the only time it’s been at risk was a period where I broke my brush and didn’t get a new one for several days. It’s now the only daily or near-daily goal I have; its function is mainly to keep me looking at Beeminder regularly. As I vaguely remember from a certain game designer repeating it many, many times, structured daily activities are key to building a routine. I seem to be less susceptible to routine than most people, but it still helps.

With the regular check-in goal in place, I can hang longer-term goals on it. Right now, that’s getting back into playing board games regularly and continuing my quest to learn more recipes. Both are things that make me happier and better-motivated when I do them, but that I forget to do from day to day. In writing this post, I also decided to add a habit of clearing out my Anki decks more regularly, since I’ve gotten out of the habit of using those.

This way isn’t the only way, but it’s an effective one, and distinctly different from how the Beehivers themselves do it. So if their ways sound alien but this seems appealing, consider giving it a shot.

Short Thought: Testable Predictions are Useful Noise

A housemate of mine thinks that whether a theory makes testable predictions is unimportant, relative to how simple it is and what not-currently-testable predictions it makes.

There’s some merit to this. There are testable theories that are bad/useless (luminiferous aether), and good/useful theories that aren’t really testable (the many-worlds interpretation of quantum physics). Goodness and testability aren’t uncorrelated, but by rejecting untestable theories out of hand you will exclude some useful and possibly even correct theories. If you have a compelling reason to use a theory and it matches well with past observations, your understanding may be better if you adopt it rather than set it aside to look for a testable one.

But there is a reason to keep the testable-prediction criterion anyway: it keeps you out of local optima. By the nature of untestability, a theory that does not make testable predictions, no matter how good, will never naturally improve. You may switch, if another theory looks even more compelling, but you will get no signal telling you that your current theory is not good enough.

By contrast, even a weak theory with testable predictions is unstable. It provides a means by which it can be shown wrong, pushing your search out of the stable divot of “this theory works well” and back into exploration. If your tests are useful, they will push you along a gradient toward a better area of theory-space to look in, but at the least you will know you need to be looking.

The upshot is this: even if you have a theory that looks very good, in the long run it is probably better to operate with a theory that looks less good but makes testable predictions. The good but stable theory will probably outstay its welcome, while the testable but weak theory will tell you to move on when your data and new experiences pass it. Like a machine learner adding random noise to avoid getting stuck, testable predictions are signals that ensure you keep exploring the possibilities.
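
To make that last analogy concrete, here is a minimal sketch of my own (not from any particular source): a hill-climbing search over a made-up bumpy function. The greedy version settles on the nearest peak and gets no signal that it should keep looking; the noisy version sometimes accepts worse moves, the way a failed test kicks you out of a comfortable theory, and tends to find the better peak. The objective function, step size, and noise level are all arbitrary illustrative choices.

```python
# Illustrative sketch: noise as a way out of local optima (all numbers arbitrary).
import math
import random

def objective(x):
    # Two bumps: a small peak near x = 2 and a larger one near x = 8.
    return 0.5 * math.exp(-(x - 2) ** 2) + 1.0 * math.exp(-((x - 8) ** 2) / 4)

def search(x, steps=5000, noise=0.0):
    """Hill-climb from x; with noise > 0, occasionally accept worse moves."""
    best = x
    for _ in range(steps):
        candidate = x + random.gauss(0, 0.3)
        gain = objective(candidate) - objective(x)
        # Always accept improvements; with noise, sometimes accept a step downhill.
        if gain > 0 or (noise > 0 and random.random() < math.exp(gain / noise)):
            x = candidate
        if objective(x) > objective(best):
            best = x
    return best

random.seed(0)
start = 1.0  # begins on the slope of the smaller peak
print("greedy ends near x =", round(search(start, noise=0.0), 1))  # stuck around 2
print("noisy tends to end near x =", round(search(start, noise=0.5), 1))  # usually finds ~8
```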

Benignness is Bottomless

If you are not interested in AI Safety, this may bore you. If you consider your sense of mental self fragile, this may damage it. This is basically a callout post of Paul Christiano for being ‘not paranoid enough’. Warnings end.

I find ALBA and Benign Model-Free AI hopelessly optimistic. My objection has several parts, but the crux starts very early in the description:

Given a benign agent H, reward learning allows us to construct a reward function r that can be used to train a weaker benign agent A. If our training process is robust, the resulting agent A will remain benign off of the training distribution (though it may be incompetent off of the training distribution).

Specifically, I claim that no agent H yet exists, and furthermore that if you had an agent H you would already have solved most of value alignment. This is fairly bold, but at least the first clause I am quite confident in.

Obviously the H is intended to stand for Human, and smuggles in the assumption that an (educated, intelligent, careful) human is benign. I can demonstrate this to be false via thought experiment.

Experiment 1: Take a human (Sam). Make a perfect uploaded copy (Sim). Run Sim very fast for a very long time in isolation, working on some problem.

Sim will undergo value drift. Some kinds of value drift are self-reinforcing, so Sim could drift arbitrarily far within the bounds of what a human mind could in theory value. Given that Sim is run long enough, pseudorandom value drift will eventually hit one of these patches and drift in an arbitrary direction by an arbitrarily large distance.
It seems obvious from this example that Sim is eventually malign.

Experiment 2: Make another perfect copy of Sam (Som), and hold it “asleep”, unchanging and ready to be copied further without changes. Then repeat this process indefinitely: make a copy of Som (Sem), give him short written instructions (written by Sam or anyone else), and run Sem for one hour. By the end of the hour, Sem writes out a new set of instructions and state in the same format. Shut off Sem at the end of the hour and pass the written instructions to the next instance, which will be copied off the original Som. (If there is a problem and a Sem does not create an instruction set, start from the beginning with the original instructions; deterministic loops are a potential problem but unimportant for purposes of this argument.)

Again, this can result in significant drift. Assume for a moment that this process could produce arbitrary plain text input to be read by a new Sem. Among the space of plain text inputs could exist a tailored, utterly convincing argument why the one true good in the universe is the construction of paperclips; one which exploits human fallibility, the fallibilities of Sam in particular, biases likely to be present in Som because he is a stored copy, and biases likely to be peculiar to a short-lived Sem that knows it will be shut down within one hour subjective. This could cause significant value drift even in short timeboxes, and once it began could be self-reinforcing just as easily as the problems with Sim.
Getting to the “golden master key” argument for any position, starting from a sane and normal starting point, is obviously quite hard. Not impossible, though, and while the difficulty of hitting any one master key argument is high, there is a very large set of potential “locks”, any of which causes the same problem. If we run Sem loops for an arbitrary amount of time, Sem will eventually fall into a lock and become malign.

Experiment 3: Instead of just Sam, use a number of people, put into groups and recombined regularly across different parts of a massively parallel system of simulations. Like Sem, each group uses entirely plain-text I/O and is timeboxed to one hour per session. Call the Som-instance in one of these groups Sum, who works with Diffy, Prada, Facton, and so on.

Now rather than drifting to a lock that is a value-distorting plain-text input for a single Sem, we need one for the entire group: it must reach one member via reading and propagate to enough of the rest via persuasion. This is clearly a harder problem, but there is also more attack surface; only one of the participants in the group, perhaps the most charismatic, needs to propagate the self-reinforcing state. It can also drift faster, once motivated, with more brainpower that can be directed toward it. On balance, it seems likely to be safer for much longer, but how much longer? Exponentially? Quadratically?

What I am conveying here is that we are patching holes in the basic framework, and the downside risks are playing the game of Nearest Unblocked Strategy. Relying on a human is not benign; humans seem to be benign only because they are, in the environment we intuitively evaluate them in, confined to a very normal set of possible input states and stimuli. An agent which is benign only as long as it is never exposed to an edge case is malign, and examples like these convince me thoroughly that a human subjected to extreme circumstances is malign in the same sense that the universal prior is malign.

This, then, is my point: we have no examples of benign agents, we do not have enough diversity of environments to observe agents in to realistically conclude that an agent is benign, and so we have nowhere a hierarchy of benign-ness can bottom out. The first benign agent will be a Friendly AI – not necessarily particularly capable – and any approach predicated on enhancing a benign agent to higher capability to generate an FAI is in some sense affirming the consequent.

Holidaying: An Update

As described in Points Deepen Valence, I’ve been contemplating and experimenting with holiday design. Here’s how it’s going:

I ran a Day of Warmth at a friend’s apartment (on the weekend after Valentine’s Day), and it went fairly well.
Good points: a ritualistic quasi-silence was very powerful, and could probably go longer. The simple notion of it being a holiday, rather than a party, does something to intensify the experience. Physical closeness and sharing the taste and smell of food were, as hoped, good emotional anchors. Instinctual reactions about what will be well-received, based on initial gut impression, seem to be pretty accurate.
Bad points: a loosely planned event is not immune, or even resistant, to the old adage that no plan survives contact with the enemy (or in this case the audience and participants). I tried to have a small handful of anchors and improvise within them, since the event was small, but without more planning, problems came up faster and ranged wider than I expected. The anchors went off alright, but not as planned; everything between them required more constant thought than I wanted. Breaking bread, without clear parameters on the bread, did not work well physically. And the desired close-knit atmosphere of comfort was not actually compatible with the intended purpose of deepening shallow friendships.
(A longer-form postmortem is here.)

My initial idea for the Vernal Equinox was a mental spring cleaning, Tarski Day. I haven’t been able to find buy-in to help me get it together, and this month’s weekends are already very crowded, so I won’t be doing that. Instead, I’ve been researching other ritual and holiday designs to crib off, and looking for events to observe. One group I’ve been looking at is the Atheopagans, who use the “traditional” pagan framework of the wheel of the year without any spiritual beliefs underlying it. I don’t empathize much with the ‘respect for the earth’ thing, personally, but cribbing off their notes (and how that blogger, specifically, modified holidays for the California climate) is valuable data. He also wrote this document on designing rituals, which includes some points I agree with and will take as advice to include, and some I dislike and consider to carry the downsides of religious practice, which I’ll avoid.

There are also the connected “Humanistic Pagans”, and a description of the physical significance of the eight-point year (Solstices, Equinoxes, Thermstices and Equitherms) here. It also includes some consequences of the interlocking light/dark and hot/cold cycles for what activities and celebrations are seasonally appropriate, which is food for thought.

I’m not sure where I’m going from here. After the Spring Equinox comes the Spring Equitherm, aka Beltane, which in many traditions, and by the plenty/optimism vs. scarcity/pessimism axis, seems naturally to be a hedonistic holiday. I am not a hedonist by nature, so while I’m sure I could find friends who would be happy to have a ritualistic orgy and/or general bacchanalia, I’m not sure I’d want to attend, which somewhat defeats the personal purpose of learning holiday design. But I don’t want to leave a four-month gap in my feedback loop between now and the Summer Solstice. I suppose I’ll keep you posted.

Minimum Viable Concept

I got into an argument, and while I don’t think anyone changed their mind, I think I realized something about why our argumentative norms are so incompatible.

The people I was arguing with are academic philosophers. They like extensive, detailed exploration of a concept, tend to be very wordy, and cite heavily.

I am a rationalist, part of a movement justly accused of being a new school of philosophy that includes among its tenets “philosophy is dumb”, and we do not have the same norms.

Here’s an example: (EDIT: After feedback that the quoted person did not agree with their paraphrased statement, I have replaced it with direct quotes.)

Me: I’d be interested in the one minute version of how you think the Sequence’s criticism of philosophy is wrong.
My interlocutor:
 There are several criticisms, if you link me to the one you want, I’ll write a thing up for you.
Me:
“Point me to a paper” is one of the frustrating things about trying to argue with [philosophers]. Particularly after [I asked] for the short version.
If you don’t have a response to the aggregate that’s concise, just say so; the response you gave instead comes off as a mix of sophistication signal and credentialist status grab, with a minor side of “This feels like dodging the question.”

Philosophers, on the other hand, seem to have a reaction to rationalist argument styles of “Go read these three books and then you’ll be entitled to an opinion.” More charitably, they don’t think someone is taking discussion of a topic seriously unless they have spent significant effort engaging with primary sources that are discussed frequently in the literature on that topic. Which, by and large, rationalists are loath to do.

The academic mindset, I think, grows out of how they learned the subject. They read a lot of prior work, and their own ideas evolved along with the things they’d discussed and written papers about. A lot of work is put into learning to model the thought processes of previous writers, rather than just to learn their ideas. Textbooks are rare, primary sources common. Working in an atmosphere of people who all learned this way would tend to give a baseline assumption that this is how one becomes capable of serious thought on the subject.

(Added note: It seems to be the case that modern analytic philosophy has moved away from that style of learning at most schools. All the effects of this style still seem to predict the observed data, though.)

The rationalist mindset grows out of the Silicon Valley mindset. They have the “minimum viable product”, we have the “minimum viable concept”. Move fast and break assumptions. Test your ideas against other people early and often, go into detail only where it’s called for by the response to your idea, break things up into many small pieces that build on each other. If you want to get a library of common ideas for a subject, read a textbook and go from there.

With this mindset, it’s a waste of time to read a long book just to get a few ideas and maybe an idea about how the author generated them; you could instead take half an idea, smash it against some adversarial thinking, and repeat that three or four times, getting several whole ideas, pushing them into their better forms, and discarding the three or four that didn’t hold up when you tested them. Find techniques that work and, if you can put them into words, give them to someone else and see if they work for them as they did for you.

So academics see us as dilettantes who don’t engage with prior art, are ignorant, and make old mistakes; and we see them as stick-in-the-muds who aren’t iterating, who waste motion on dead ends without anyone to tell them they’re lost, and who slow down any attempt at collaboration.

(I don’t think I’ve changed my mind about what I prefer, but I hope I’ve passed an ideological/epistemological Turing test that lets people make up their minds which is better.)

Self-Reifying Boundaries

In the words of Scott Alexander:

Chronology is a harsh master. You read three totally unrelated things at the same time and they start seeming like obviously connected blind-man-and-elephant style groping at different aspects of the same fiendishly-hard-to-express point.

In my case this was less “read three totally unrelated things” and more “read one thing, then have current events look suspiciously related”. I have been working my way through Thomas Schelling’s “Strategy of Conflict”, which made precise the concepts we now call “Schelling points” and “Schelling fences”, among others. He was focused on the psychological game theory of positive-sum bargaining, particularly in the context of nuclear war.

And then some black bloc antifa asshole punched a white supremacist.

Which I’m against. Do not punch Nazis. No, not even if they’re wearing spider armbands and shouting Heil Hitler. Imminent self defense only. “Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever.”

But why?

Why is free speech protected? Other good tools, it can be pointed out, are also usable for bad purposes. Abusers “set boundaries” to maintain their control, but boundary-setting is healthy in other contexts. We do not have a “right to set boundaries” that protects abusers’ misuse of it.

The first reason is the marketplace of ideas, which Scott defended more eloquently than I’m likely to manage. A good reply to a bad argument, or a morally terrible ideology, is one that addresses the substance, not one that silences it. Say there are only clueless idiots being wrong, and enlightened philosophers being right (or at least less wrong). 1000 clueless idiots can silence 10 enlightened philosophers just as well as 1000 enlightened philosophers could silence 10 clueless idiots. Or you could argue the substance; even if there are 1000 idiots arguing, the philosophers are probably going to win this one.

And because that’s true, we should be very skeptical of attempts to shut down speech. If you need to silence it, that suggests you don’t think you can beat it on the merits, while every day it sits out in the marketplace of ideas and fails to catch on is another snub, showing that their ideas are not worthwhile.


The second reason is where we get back to Schelling. He spends a couple chapters and spills a bunch of ink about points for implicit cooperation in cooperative games with no communication. The classic example is meeting someone in New York City, but the purest one is this:

Pick a positive number. If you pick the same as your partner, you both win.

The correct answer is 1. Not because of anything inherent, but because human minds tend to settle on it; if you line up all the integers, it comes first. Similarly, if two parachuters land on a map and don’t know each other’s locations, they should meet at whichever feature is most unique. On this map, meet at the bridge:
[Image: schellingmap]
If there is only one building, and two bridges, meet at the building. And if right before you jumped, one of you said “if I got lost, I’d climb the highest hill around and look for my buddy”, then you go to the highest hill around.

Critically – and this is where Schelling gets to his real subject – you should climb that hill even if it’s grotesquely unpleasant for you. It wasn’t the obvious place to meet, but by the act of mentioning it your buddy has made it so; now it is. The act of mentioning that something might be the obvious place to coordinate, if communication stops there, makes it the obvious place to coordinate. Make a stupid assumption out loud at a time when shared context is scarce and no one can contradict you, and you reify your stupid assumption into consensus quasitruth, because everyone knows that everyone knows about it, and now you have a shared premise for reasoning about where you go from there.

This is culturally and contextually determined. If you have to coordinate on a number from the list “three eight ten ninety-seven seventy-three”, you’ll probably pick ten, but if you counted in base 8, you’d probably pick eight instead. And these natural coordination points determine points of reasonable compromise. A car salesman haggling doesn’t say “I will accept no less than $5173.92 for this one”, because no one would believe it. “I will accept no less than $5200”, though, we will believe (as much as we’ll ever believe a car salesman).
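
As a toy illustration of how much shared salience buys, here is a small simulation of my own (not from Schelling): two players each pick from that list of numbers, once uniformly at random and once with a crude shared “roundest number” rule. The rule itself is an arbitrary stand-in for whatever makes an option feel obvious to both sides.

```python
# Toy coordination game (my own illustration): agree with your partner without communicating.
import random

options = [3, 8, 10, 97, 73]

def pick_random():
    return random.choice(options)

def pick_salient():
    # Crude stand-in for salience: the "roundest" number (most trailing zeros in
    # base 10), breaking ties by smallness. Here that picks 10; someone counting
    # in base 8 would find 8 the round one instead.
    def roundness(n):
        digits = str(n)
        return (len(digits) - len(digits.rstrip("0")), -n)
    return max(options, key=roundness)

def match_rate(picker, trials=10000):
    # Two independent players use the same picking strategy; how often do they agree?
    return sum(picker() == picker() for _ in range(trials)) / trials

random.seed(0)
print("random picks agree:", match_rate(pick_random))    # about 0.2 (1 in 5)
print("salient picks agree:", match_rate(pick_salient))  # 1.0 -- the focal point always matches
```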


At the time he was writing, we had conventional explosives more powerful than any nukes that were public knowledge. We used them. Nukes stayed off the table anyway, not because they were different but because they felt different. It was an obvious line, and obvious to everyone that it was obvious to everyone. And so “no nukes” became one of the rules of limited war in a way that “no nukes more destructive than our best conventional bombs” couldn’t have. The perception of them as a difference in kind reified itself, creating a distinct legal status purely because of their distinct subjective perception.

The same is true of free speech. There are reasons to think that free speech is more important. (See reason one.) But even if those reasons don’t cut it, everyone knows about them, and since the Enlightenment it has been treated as especially important. It’s more vivid in the USA, where we elevated it to the second right specifically protected in the Bill of Rights, but even in Europe, where its status is lower, everyone understands that protecting freedom of speech is special, even where they allow exceptions. Even if it isn’t, in an objective ethical calculus, actually worth special protection, we treat it as a bright line which only tyrants cross, and bending that bright line makes you appear legitimately tyrannical, whether you do it with the law, with violence, or with social warfare and campaigns of ostracism.

So.

Don’t.

On Pointy Hair

For anyone who’s read Dilbert, or knows someone who did, there’s the general concept of a “pointy-haired boss” (sometimes also called a “bogon”, or just “a suit”). Pointy-haired bosses are not just managers, but managers who seem to operate on a level totally disconnected from reality. Recently I read a post that reminded me of them and, relatedly, of why traditional interviews at software companies are unusually stressful.

Ben Hoffman’s The Quaker and the Parselmouth uses a metaphor of “Quakers”, who never lie and treat promises very solemnly, and “Parselmouths”, who lie freely, but never to each other.

Are there advantages of being a Quaker over being a Parselmouth? I’ve already argued that in particular cases there can be advantages in being trusted by the untrustworthy. A Quaker bank might not be happy to lend money to [people who lie freely], but it should be happy to have them as depositors.

But I don’t automatically get credited for my attempts to say what I mean and no more. If the people around me have no idea that this might even be a thing, then what incentive do I have to keep doing it? And yet, I don’t find myself smoothly adjusting to my circumstances – I find myself awkwardly trying to say only and exactly what I mean, even in circumstances when people are expected to exaggerate, so I’ll be taken to mean much less than I do. I suspect that it’s not quite possible for humans to completely fine-tune their honesty case by case. I suspect that it’s hard to learn that words have meanings here but not there, that justice is a virtue in this place but a vice in that one.

Instead, I suspect that for the health of the souls of those who are dispositionally inclined towards treating words not as mere reports of current inclinations, but as things designed to stand enduringly, monumental inscriptions meant to be true long after the time in which they were written passes away, these people need an environment where this is in fact globally the case.

There are purposes where being neither of the two is a good decision; politics, whether national or office, is one, and acting is another; let’s call people who use this style Actors. All are – most likely – durable inclinations. If you learn to keep your words close to the truth to avoid dangerous miscommunications about precise topics, it is harder to let your words move fluidly to persuade. If you learn to adapt your words to the situations at hand for effectiveness, and not worry too much about how long or how precisely they describe the truth as you see it, it is hard to switch to careful precision even when it’s critically important. It’s most likely possible to learn the “Parselmouth” style, with difficulty; the Marranos of Spain, or similar groups who must scrupulously present one face in private and another in public, may manage it. But it’s certainly harder. (There’s at least one more approach worth gesturing at, where words are treated as distractions and actions and physical presence are the accepted signs of truth, but I don’t know it personally.)

But communication between the two styles is difficult; with such different approaches to words and the truth, for many purposes you need as much translation effort as between two different languages. (Lullaby words create a similar problem.) Generally, with a good working relationship and experience, most people in mixed environments of Quaker-ishness and Actor-ishness learn this translation. But it’s unnatural and frustrating, and so the Quakers call the managers pointy-haired bosses and the Actors call the engineers… actually, I’m not sure. “Autistic pedants”?

And if you’re a software engineer, this tension gets acute during the traditional-format on-site interview. There are technical questions, where precision and holding tight to the truth are critical and the ideal mindset is Quaker. But there are also ‘soft’ questions, whose goal is to see a frankly unrealistic level of enthusiasm for this specific company, a description of your work on past projects framed to maximize your contribution and its scope, and generally an image of yourself and your work that conforms to expectations. This is a very Actor-y mode, and frequently you’ll be asked to transition between the two in the course of answering a single question. There may be people for whom this is natural, but they’re a small segment of the population.

In short: interviews suck, and communication across the boundaries of truth norms sucks hard.