So, that didn’t work at all

Anyone who has been following my blog will note that I changed URLs to be a ritual-focused blog and then went silent, and now I’m back at the old URL. I was pretty enthusiastic, so what changed?

Short version? My P(Superintelligence before 2070) skyrocketed.

Given current ML progress, it no longer looks plausible that anyone in the current rationalist community who does not already have adult children will have adult grandchildren on this side of the Singularity. We will most likely have one more generation who reach adulthood; we will not have two. Either we’ll be dead, or we will cross the point beyond which the future cannot be predicted, which is, remember, what ‘singularity’ means.

Maintaining a culture across one generation is feasible without a coherent strategy. More likely to succeed with a strategy, certainly, but feasible without one. Only for two generations (or more) is it desperately important to work out rituals, holidays, social practices, etc., that preserve the values and good qualities of the community and carry them into the future.

It’s still interesting and I may try things. But it’s no longer my contribution to the community and the shared project. It’s just a fun hobby.

High Summer

We are up here, blowing in the cold wind on the warmest day of the year, by our own hands.

Abolishing disease is great, something to celebrate cerebrally. But today, we’re speaking to the lizard brain: look around, make a call, and leave a message:

We build mountains!

We don’t always live up to our potential. But if you ever doubt our potential, remember this. We are the ones who build mountains.

-excerpts from my plans for a speech for High Summer

Welcome back to my blog, now moved to a dedicated site for a renewed focus on holiday crafting and secular, individualist, rationalist ritual.

My next plan is a slightly refined older design I never tested out: High Summer. In line with the wheel-of-the-year schema, this belongs in mid-August, when the Celts had Lughnasadh. Its message is the potential of humanity, as we have already demonstrated it, and a reminder of what we are capable of in the future. Its intended emotion is awe. Its medium for both is simple: feel the wind on your face from a tall, open-air place, one that is entirely man-made. Notice and remember, viscerally, that we have built mountains.

I am actively looking for a location in the middle Bay (SF/Oakland, presumably) to host this. The SF POPOS law says that all the rooftop decks should be free to the public for this kind of purpose, but it’s not enforced at all, so I’ll probably have to pay money for a venue. I probably don’t need other people to run the event, but a couple of helpers would be very welcome.

Ritual of Kith

Not all rituals should be holidays. Some should commemorate important one-time events. Marriage is an existing ritual, and one which we seem to have worked out a pretty decent secular variant of. Historically, many societies also had one for adoption; this is not something we have a worked example of in modern society.

These are also both special cases of a more general schema: ritual for acknowledging someone as family. Historically, this was very important, since a lot of social safety net was built on the family (and the rest was usually built on the parish church). Rituals like this are very useful, because it’s not enough to recognize someone as family; you also have to establish common knowledge in your community that you have recognized them as family. It is/was expected that family came before the rest of the community, unless your community member’s needs were much, much greater. It was an explanation and excuse for what would otherwise be considered unfair partiality, and if that common knowledge was in place, it avoided the appearance that the unfair partiality was motivated by personal grudges or vitriol.

So this seems like an excellent ritual we should have. However: copying the idea across naively, it fails as individualist ritual. Acknowledging family tends to move the local culture towards thinking of families as units, and that is unacceptably anti-individualist. What we need is something related, but different. Not a ritual marking someone as kin… but, perhaps, one marking someone as kith. In the sense suggested to modern ears by the phrase “kith and kin”; this sounds like it means the people who are as important as kin/family, but who are not family. (This is not what the word meant historically, but absent a better term, I don’t see any reason to let that get in the way of a good name.)

This overlaps substantially with polyamory; many people with multiple partners will have something more family-like with some, especially if there are kids, but remain comparably close to their other partners. But, and this is very important, that is not transitive. If I am dating Alice and have been for years, and Alice is also dating Bob and has been for years, this does not mean I am kith with Bob. Generally it means I can at least get along with Bob at the kitchen table, or else there would be strain on Alice’s relationships, but who Alice considers her ‘inner circle’ is entirely her own business. It becomes my business only to the extent that this creates conflict, conflict that either I or Alice finds more intolerable than the prospect of ending our close relationship.

I do not have a design for this ritual, yet. But I do have other desiderata:

  • It has to be costly in some sense. Someone who declares five hundred people part of their kith has devalued the label to the point of uselessness, and damaged its viability for others. A tentative way to do this is for the ritual to require the physical presence of your existing kith, or at least most of them; this both scales superlinearly in difficulty and makes it very obvious to all observers, including yourself, that this is getting a little out of hand.
  • It has to be public in some sense. At minimum, anyone in your kith’s kith should know, and ideally it should be something that polite gossip spreads. This might be possible to fudge with a website that maintains a public register of kith declarations. (This has some aesthetic similarities to Reciprocity.IO’s role as a conditional-disclosure hangout/dating interest accumulator. Reciprocity is not online at time of writing but a replacement is currently semi-public.) Less-fudged would be to allow (but not require) the broader community to be in the audience for the ritual.
  • It should be symmetrical. If an ostensibly-permissible asymmetric variant exists alongside an official or unofficial symmetric one, the symmetric one will be the commonly-used form, and social pressure will almost certainly push for the asymmetric variant to go unused as ‘rude’. If that happens, it would be better for the socially-banned ‘de jure’ asymmetric variant not to exist at all, because it is largely a lie. That probably is what would happen, with the asymmetric variant a minority demand, so let’s skip the intermediate steps.
  • It must be reversible, but this has to be even more costly. If we provide pseudo-marriage, we must provide pseudo-divorce; covenant marriage is extremely anti-individualist and therefore unacceptable. No-fault dekithing is preferable, but it might be acceptable to settle for a model where an offense is needed as reason, though the bar must not be as high as it was for divorce before no-fault divorce became the norm.
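As a thought experiment, several of these desiderata can be sketched in code. This is a toy sketch, not a proposal for real software; the class, its names, and the majority-witness rule are all hypothetical choices of mine:

```python
class KithRegister:
    """Toy register of kith declarations.

    Encodes three of the desiderata above: symmetry (both parties must
    declare before the pair counts), costliness (a majority of your
    existing kith must witness each new declaration), and no-fault
    reversibility (either party can dissolve unilaterally).
    """

    def __init__(self):
        self._pending = set()   # one-sided (declarer, declaree) pairs
        self.kith = set()       # confirmed pairs, stored as frozensets

    def kith_of(self, person):
        return {p for pair in self.kith if person in pair for p in pair} - {person}

    def declare(self, declarer, declaree, witnesses):
        existing = self.kith_of(declarer)
        # Costliness: a majority of existing kith must be present.
        if 2 * len(set(witnesses) & existing) < len(existing):
            raise ValueError("a majority of existing kith must witness")
        self._pending.add((declarer, declaree))
        # Symmetry: the pair is kith only once both sides have declared.
        if (declaree, declarer) in self._pending:
            self.kith.add(frozenset((declarer, declaree)))

    def dissolve(self, a, b):
        # No-fault reversal; making this *more* costly than declaring,
        # per the last desideratum, is left open here.
        self.kith.discard(frozenset((a, b)))
        self._pending.discard((a, b))
        self._pending.discard((b, a))
```

A one-sided declaration does nothing visible; Alice and Bob become kith only after both have declared, and a declaration without enough witnesses simply fails.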

Variants we might want to support:

  • Temporary declarations. Several people in the rationalist community have made use of handfasting, commitments to behave as married for a specified duration (typically a year and a day), in some cases repeated some number of times before the relationship was formalized as a marriage. These have been, maybe counterintuitively, among the marriage-like relationships with the best track record for sticking together long-term, so there seems to be something valuable there.
  • Mutual declarations by groups of more than two people. It may make sense to have these be formally designated ‘kin’ relationships, since it is a mutual agreement by a group of people to treat each other as an inseparable unit. For an anonymized real example: two married bioparents and their roommate, who had been coparenting the couple’s biokid, determined that the kid would be traumatized as much by losing the coparent as by losing either bioparent, and therefore formalized this as a family unit. This took the form of a public statement on social media making their wishes clear regarding custody of the kid in the event of something happening to the bioparents, which worked pretty well. But a formal mechanism for this seems desirable, and the informal method doesn’t work if there isn’t a kid.

Unfinished Thought: Walking Back Opposition to Authentic Relating et al.

Epistemic Status: Better thought-through than previous things I’d written on the subject. Still very fuzzy and iffy. Being kicked out of my drafts folder after about 3 years.

As anyone who reads this blog is probably aware, I have had some vehement, public disagreements about community norms, Circling(/Authentic Relating) and how much to distrust status instincts. I had an extended conversation about that whole cluster with Divia Eden at a dinner party some time ago, and concluded that at least a substantial part of my motivation was poorly-grounded, causing me to significantly underestimate the costs of my preferred approach.

The crucial point is that if you dismiss the goal of these projects as “useless if not actively bad”, as I had done, you should expect gut-level motivation to weaken and fall out of step with intellectual motivation. In particular, going too far in that direction is likely to produce very intelligent, dedicated, completely depressed and lumpen people. We have so many of those hanging around the Berkeley rationalist cluster that the specific subset who are transwomen acquired a collective name (together with a couple extra traits): “catgirls”.

A Testable Prediction has been made, in Accordance with The Law. Behold! The test came back positive. We do need something in this genre, or we will, directly, fall into this failure mode. However, I still think that most practitioners here are Not Paranoid Enough. More care is needed in how we align intellectual priorities with internal felt priorities and, ideally, with socially-shared priorities. This is necessary, but it is very, very easy to get wrong.

So we need something better planned. We need the Bardic Conspiracy.


While our modern neoreactionaries didn’t originate their line of thought, and anti-enlightenment thought is approximately five minutes younger than the enlightenment, I think that ultimately, most of the problems they see in modern society don’t stem from democracy or ‘demotism’; they came about because Germany built the Autobahn.

Naturally, there are a couple steps in this chain of logic. And it’s possible I’m misunderstanding NRX thought. But if you’ll bear with me for a moment:

It’s a common claim in NRX circles that we have lost social technology that had been making our lives far better for a very long time, and that if we could replace it, we could get ‘more good stuff and less bad stuff’, generally accompanied by some variation on this graph:

[Image: graph of homicide rates, prison rates, and their sum; the sum rises sharply from 1960 to 1990.]
A graph of homicide rates + prison rates, originally from Steve Sailer.

To be fair, this version of the graph raises some questions about the usual explanation; it looks more like prison-unresponsive long-cycle variation in the crime rate than a breakdown of social technology. But there are a lot of confounders, so I’m willing to basically accept the premise (somewhat for the sake of argument): we have a very high imprisonment rate now, just to get crime almost as low as we used to get with much lower punishment, and the reason we had that then was better social tools for discouraging crime.

When did that break down? Well, roughly the 60s. In the wake of the pill? True, but also in the wake of the Interstate Highway System. We had very long experience iterating on social systems that worked to punish defectors and encourage good behavior on the scale of a town, and despite the changing nature of the city, on the neighborhood level they worked pretty well too. What the critical parts were, I’m not sure. Perhaps religion, perhaps sexist social roles; the only confident claim I’ll make is that the systems of public shame and reputation were important. What’s critical here is that they broke down in the face of easy cross-country transportation. You could outrun your bad reputation – Kerouac’s On The Road is a description of this outrunning process at work. (Scott Alexander drew attention to this aspect of the story, h/t Scott.)

And the Interstate was basically inevitable, once Eisenhower saw the Autobahn at work. And Eisenhower may have been an unusually good wartime logistician (tangent: why isn’t there a word for that profession?), but not an extraordinary one; what he saw soon, someone else would have seen soon enough. Which gets to my point: the Autobahn was built for war, but by existing and being used, it ensured the mobility that would break down the assumptions underlying our effective social technology.

“AH HAH!” says my hypothetical interlocutor, “But you have tracked this back to another dangerous demotist society, the Nazis!”. This is of course true. But was the Autobahn a fascist idea? Would the Prussian monarchy that united Germany under Bismarck’s hand, or much earlier encouraged the development of the post roads along with the House of Thurn und Taxis, really have turned down the massive strategic mobility it offered? No, this was a pure power move; if the military moved faster on well-paved roads, the roads were going to be built.

So who is to blame for that, then? The inventor of the tank? Henry Ford, for making the car mass-produced enough to be a mass weapon of war? The inventor of the diesel engine? In my view, it’s inseparable; the timing could have been different, but from the moment the steam engine was invented, the car, the transportation network, and the breakdown of the locality-based social system were just a matter of time.

Now, one of the traditional NRX reactions is to say that yes, this is all terrible, and we should therefore go back to monarchy. But short of destroying all the products of the Industrial Revolution, we can’t actually reverse the trends. We do need better social systems, but the old ones could at best be a short-term patch that broke down as quickly as The New Republic when it met The Festival.

Which still presents a problem: How can we recover the social benefits of robust localized reputation and societal expectations without sacrificing the economic, health, and non-bigotry benefits our development has given us or setting ourselves up in a perpetual war against the free flow of information? Here are a couple ideas:

  • Futarchy-like reputation markets. Make influencing reputation anonymous but costly; if events defy your predictions, you lose significant reputation yourself, as well as the numeric currency that stands in for credibility. (Naturally this will draw some comparisons to Whuffie.)
  • Enclaves a la Scott Alexander’s Archipelago, where different local-scale societies lower the physical cost of movement but increase the social cost of adjusting to a new framework; presumably the Jackson’s Wholes would be outcompeted, but perhaps we’d end up living in some variant of neo-Hong Kong. (Which could be Buck Godot’s New Hong Kong or Hiro Protagonist’s Greater Hong Kong. Hong Kong is probably over-emphasized here, but hey, maybe it was actually that great.)
  • Narrow-outward approach to enclaves: Build one culture that works and try to make it good enough to be worth keeping social credit in. I call my personal attempt at this project “The Bardic Conspiracy”. Presumably if this works others will try the same, and in the meantime we in our silo have gotten some of the benefits back.
  • Federalism with social teeth: the USA has lots of subunits and many of them have real regional identities, which are not trivial to leave. Can we find a way to amplify that? Maybe split them up even smaller? (Certainly, e.g., California has several subunits with separate regional identities: SoCal, NorCal, and inland. Even New York has ‘NYC’ and ‘Upstate’.)
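The first idea is the most mechanism-like of the four, so here is a toy sketch of what a stake-based reputation market might look like. All the numbers and payoff rules are invented for illustration, not derived from any real futarchy design:

```python
class ReputationMarket:
    """Toy sketch: raters stake their own reputation on predictions
    about a subject's behavior. Wrong predictions cost the rater,
    so cheap slander is self-limiting."""

    def __init__(self, starting_rep=100.0):
        self.rep = {}            # name -> reputation score
        self.starting = starting_rep
        self.open_bets = []      # (rater, subject, prediction, stake)

    def _ensure(self, name):
        self.rep.setdefault(name, self.starting)

    def stake(self, rater, subject, will_defect, stake):
        self._ensure(rater)
        if stake > self.rep[rater]:
            raise ValueError("cannot stake more reputation than you have")
        self.rep[rater] -= stake   # escrowed while the bet is open
        self.open_bets.append((rater, subject, will_defect, stake))

    def resolve(self, subject, defected):
        self._ensure(subject)
        still_open = []
        for rater, subj, prediction, stake in self.open_bets:
            if subj != subject:
                still_open.append((rater, subj, prediction, stake))
                continue
            if prediction == defected:
                self.rep[rater] += 2 * stake   # stake back plus winnings
            # wrong raters simply lose the escrowed stake
        self.open_bets = still_open
        # The subject's own reputation moves with the observed behavior.
        self.rep[subject] += -10.0 if defected else 5.0
```

The anonymity requirement, and the question of who adjudicates “defected”, are exactly the hard parts this sketch punts on.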

American Water Policy Is Asinine

(This post contains many puns, most totally unintended. It turns out lots of economic terms about price movement are about water. My apologies, but not enough apologies to get rid of them.)

Libertarians have a rule of thumb: “Anywhere you find a shortage, look back and you’ll find price controls causing it.” It’s almost always true; maybe actually always. What does that have to do with anything? Bear with me a moment.

If you enjoy laughing at libertarians who are self-parodies, read this article. It is a heavy dose of First World Problems blown out of proportion. And yet? He’s got a point.

In the United States, we introduce costly regulations for consumer water use that reduce it by a respectable but not large factor; a 30% cut or so, optimistically. And all consumer water use combined is only 2% of the total.

In California, billboards in every city remind you “Conserve Water! It’s a Drought!”. Meanwhile, almond farms use far more water, and alfalfa farms use tons to grow plants that aren’t particularly well-suited to California weather and immediately get shipped overseas when harvested. If we were managing this effectively, there would not be alfalfa farms in formerly-arid parts of California.

In Flint, Michigan, the residents have to buy bottled water at exorbitant expense because their pipes can’t deliver it to them. Meanwhile, nearby, Nestle makes a killing buying water rights and packaging it as bottled water.

These have something in common: Water is being poorly allocated. The residents of Flint, and many other American cities, value clean water far more highly than Nestle does, but Nestle can negotiate for the water rights more effectively and the populace can’t even attempt to outbid them. California urges a small fraction of the water consumption to be cut, because farms are pumping water to dry areas that don’t make sense to farm in to grow crops better suited to other places.

The common thread is a fixed price of water from the pipes that doesn’t respond to supply and demand. The water the farms use is worth more than the farms are paying, but it is going to suddenly run out and then not be available at that price any longer. If the system were stable, the price would rise as projected supply (i.e. rainfall) dropped. In Flint, the managers of the water supply could only make a fixed profit from selling it through normal channels, but selling the rights to Nestle faced no such barrier, so that price could float to a market rate.

Of course, these restrictions on price exist for a reason, and the goal is good: Everyone should have enough clean water to drink. But in Flint in 2016, and California in ~2020, this isn’t likely to happen. The problem with a price control is that it affects everyone; if the price of water is the same for the consumer drinking it and the farm watering alfalfa, you can’t subsidize one without subsidizing the other. When >90% of the water consumption is being subsidized for the sake of the other <10%, many enterprises will show up which would be wildly unprofitable if they had to pay what the water was worth but can thrive on the cheap supply. When bottled water, even in bulk, is so much more expensive than tap water, companies will turn one into the other and net a hefty profit, and consumers whose tap fails see their water costs skyrocket.

Consider an alternative: Government subsidy of drinking water only. Provide a large tax credit for consumer water expenses (personal use only), perhaps 90% of the cost, and let the prices float free. The water company can raise rates until they have supply managed, and if the citizenry suddenly needs to switch to bottled water they won’t have such a massive jump in cost.
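As a toy illustration of how the incentives shift, with entirely made-up prices and volumes (none of these numbers come from real water data):

```python
# Illustrative numbers only: every price and volume here is invented.
market_price = 0.010        # $/gallon once prices float
controlled_price = 0.002    # $/gallon under the current fixed price
credit = 0.90               # consumer tax credit on personal water use

household_gallons = 10_000        # yearly personal use
farm_gallons = 100_000_000        # a hypothetical alfalfa operation

# Status quo: everyone pays the same controlled price.
household_now = household_gallons * controlled_price
farm_now = farm_gallons * controlled_price

# Proposal: prices float; only the household gets the credit.
household_after = household_gallons * market_price * (1 - credit)
farm_after = farm_gallons * market_price

print(f"household: ${household_now:,.2f} -> ${household_after:,.2f}")
print(f"farm:      ${farm_now:,.2f} -> ${farm_after:,.2f}")
```

Under these made-up numbers the household’s bill actually drops while the farm’s quintuples: the subsidy reaches the drinker without propping up the alfalfa.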

Ritual Inception

What is the most resilient parasite? Bacteria? A virus? An intestinal worm? An idea. Resilient… highly contagious. Once an idea has taken hold of the brain it’s almost impossible to eradicate. An idea that is fully formed – fully understood – that sticks; right in there somewhere.

— Dom Cobb, Inception

EAMES: We tried it. Got the idea in place, but it didn’t take.
COBB: You didn’t plant it deep enough?
EAMES: It’s not just about depth. You need the simplest version of the idea – the one that will grow naturally in the subject’s mind. Subtle art.

Inception, shooting script

Making a ritual or holiday is very much like performing inception. The nature of ritual is to smuggle an idea past most conscious filters, sticking it straight into your audience’s aliefs. As in the movie, once it’s in place you have very little control over it; it is very difficult to displace, and it may have large unintended consequences. (The movie depicts this through the character of Mal.)

Also like inception, the difficulty of imparting the idea scales very quickly with the complexity.

Aligning your aliefs with your beliefs is hard [Citation Needed]. So ritual can be very valuable! But for the same reasons it’s effective, ritual used carelessly can misalign your aliefs severely, and you may not even notice the problem. Because unlike in the movie, a misfired ritual will not pursue you through your dreams with murderous intent. It will just be a new cognitive bias you have acquired, and noticing your own biases is hard.

This is why I am extremely careful in my ritual design, to the point of taking years to actually iterate. Because failure is not necessarily recoverable.

A General Theory of Bigness and Badness

So, a glib take you’ve probably heard is that the problem with Big Government, Big Business, Big Etc. is not the government or the business or the etc. but the “Big”. This is extremely superficial and is essentially elevating a trivial idiosyncrasy of the English language to an important structural principle of the universe, which makes about as much sense as nominative determinism. I think it’s true anyway. Here is my theory of why:

Large groups of people are increasingly hard to coordinate. Getting a group of one person to be value-aligned with itself is literally trivial, 4 people is easy, 12 people is doable for fairly complex values, 50 gets difficult, etc. For a very large organization getting the whole org focused on a complex, nuanced goal is basically impossible.

So the larger an organization gets, the more its de facto goals become simplified, even if it keeps paying lip service to nuanced goals. Theoretically it should be possible to keep nuanced goals at a large scale, but it would take more and more effort per person as you get bigger, and I suspect it would reach “you must spend 110% of your time working on tasks to keep yourself value-aligned”, i.e. impossible-in-practice, somewhere in the 150-1000 employees range.
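A toy model makes the shape of this claim concrete. The per-pair coordination cost is a number I invented for illustration; the point is the linear growth per member, not the specific figure:

```python
# Toy model: suppose staying value-aligned with each colleague costs a
# fixed slice of your week. The 6 minutes/week figure is invented.
minutes_per_week = 40 * 60
cost_per_pair_minutes = 6

def alignment_overhead(n):
    """Fraction of a member's week spent on alignment, assuming
    pairwise coordination with every other member."""
    return (n - 1) * cost_per_pair_minutes / minutes_per_week

for size in (1, 4, 12, 50, 150, 500):
    print(size, f"{alignment_overhead(size):.0%}")

# The size at which alignment alone would consume more than a full week.
n = 1
while alignment_overhead(n) <= 1.0:
    n += 1
print("impossible-in-practice at", n, "members")
```

With these made-up numbers the wall lands at 402 members, inside the 150–1000 range guessed above; tune the per-pair cost and the wall moves, but the linear-per-member growth does not.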

So that’s part one of the theory: goals get simplified and nuance disappears as the organizations get bigger. By itself this is sufficient to strongly suggest that big organizations are bad in and of themselves. But there are some corollaries which make the case stronger.

Corollary the first: Flatness of hierarchy does matter. The deeper the hierarchy, the more large sub-organizations exist within the large parent organization. The same forces that push the parent org to have a simple goal push each sub-org to have a simple goal. This can explain the rampant bureaucratic infighting in large hierarchical organizations; each sub-org is following its default goal and those come into conflict. This is approximately the Hanlon’s Razor (“Never ascribe to malice that which is adequately explained by incompetence”) analog of The Gervais Principle.

Corollary the second: “Big” may not imply “evil” but it does forbid “good”. Only simple goals are sustainable for large orgs. But not all simple goals are equally “reproductively fit”. For for-profit companies the most reproductively fit goal is “make a profit”. For political parties it’s “get (re)elected”. For bureaucracies it’s “maintain/expand our budget”. For charities…probably it’s “keep our incoming donations high”, but I’m not confident in that. The bigger the organization, the harder it is for well-intentioned members, even well-intentioned leaders (CEO Larry Page and President Sergey Brin, President Barack Obama, Chairman of the Joint Chiefs of Staff Colin Powell, …) to keep the organization out of the “low-energy well” that is the default self-perpetuation goal. Organizations which are kept on task for an unselfish core goal will do poorly relative to their peers and tend to die out.

In summary: No manager could possibly keep a large organization on target for a complex goal, and attempting to keep a large org on target for a simple but unselfish goal will rapidly kill the organization. This applies fractally to sub-organizations and super-organizations.

Does this teach us lessons about what to do? Well, it cautions us against trusting that large organizations consisting of benevolent people will act benevolently. It makes me somewhat more skeptical of OpenAI. But nothing specific, no.

No Separation from Hyperexistential Risk

From Arbital:

A principle of AI alignment that does not seem reducible to other principles is “The AGI design should be widely separated in the design space from any design that would constitute a hyperexistential risk”. A hyperexistential risk is a “fate worse than death”, that is, any AGI whose outcome is worse than quickly killing everyone and filling the universe with paperclips.

I agree that this is a desirable quality for any design, or approach to creating a design, to have. However, I think it’s impossible to achieve while preserving the possibility of an ‘existential win’, i.e. an outcome roughly as good as a hyperexistential risk is bad. In order to create the possibility of a Very Good Outcome, your AGI must understand what humans value in some detail. The author of this page* provides specifics, which they think will move us further away from Very Bad Outcomes, but I don’t agree.

This consideration weighing against general value learning of true human values might not apply to e.g. a Task AGI that was learning inductively from human-labeled examples, if the labeling humans were not trying to identify or distinguish within “dead or worse” and just assigned all such cases the same “bad” label. There are still subtleties to worry about in a case like that[…] But even on the first step of “use the same label for death and worse-than-death as events to be avoided, likewise all varieties of bad fates better than death as a type of consequence to notice and describe to human operators”, it seems like we would have moved substantially further away in the design space from hyperexistential catastrophe.

I find it hard to picture a method of learning what humans value that does not produce information about what they disvalue in equal supply, and this is no exception. Value is for the most part a relative measure rather than an absolute; to determine whether I value eating a cheeseburger it is necessary to compare the state of eating-a-cheeseburger to the state of not-eating-a-cheeseburger, to assess whether I value not-being-in-pain you must compare it to being-in-pain, to determine whether I value existence you must compare it to nonexistence. To the extent we are not labeling the distinction between fates worse than death and death, the learner is failing to understand what we value. And an intelligent sign-flipped learner, if we gave it many fine-grained labels for “things we prefer to death by X much”, would at minimum have the data needed to cause a (weakly-hyper)-existential catastrophe; a world in which we did not die but did not ever have any of the things we rated as better than death. Unless we have some means of preventing the learner from making such inferences or storing the information (so, call the SCP Foundation Antimemetics Division?), this suggestion would not help except against a very stupid agent.
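A toy example of the point about fine-grained labels, with invented outcomes and utilities (nothing here comes from the Arbital page):

```python
# Invented outcomes and utilities, for illustration only.
fine_grained = {
    "flourishing": 100,
    "ordinary life": 50,
    "quick death": -100,
    "eternal suffering": -1000,
}

# Coarse labeling: death and everything worse share one "bad" score.
coarse = {k: max(v, -100) for k, v in fine_grained.items()}

def best(outcomes, sign=+1):
    """The outcome an optimizer steers toward; sign=-1 models the
    sign-flipped learner."""
    return max(outcomes, key=lambda k: sign * outcomes[k])

print(best(fine_grained, -1))  # 'eternal suffering': the fine labels hand
                               # the flipped learner its target
print(best(coarse, -1))        # a tie inside the undifferentiated "bad"
                               # bucket: death and worse are identical to it
```

The same dictionary that lets a correctly-signed learner find the best outcome is exactly the data a sign-flipped one needs to find the worst; only the coarse labeling withholds the distinction.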

Of course, maybe that’s the point. It seems obvious to me that a very stupid agent does not pose a hyperexistential risk because it can’t build up a model detailed enough to do more than existential harm, but “obvious” is a word to mistrust. Could I make the leap and infer the reversal property? I believe I could. Could one of the senders of That Alien Message, who are unusually stupid for humans but have all the knowledge of their ancestors from birth? I’m fairly confident they could, but not certain. Could one of them cause us hyperexistential harm? Yes, on that I am certain. That adds up to a fairly small, but nonempty, segment of probability space where this would be useful.

But does that add up to the approach being worthwhile?

* Presumably this is Eliezer Yudkowsky, since I don’t believe anyone else wrote anything on Arbital after its “official shutdown”, which was well before this page was created. But I’m not certain.

@docstrings: You have no class.

If you have written any Python code in a shared project recently, you have probably seen a documentation convention like this:

def complex(real=0.0, imag=0.0):
    """Form a complex number.

    @param real: The real part (default 0.0)
    @param imag: The imaginary part (default 0.0)

    @returns: ComplexNumber object.
    """
    if imag == 0.0 and real == 0.0:
        return complex_zero

This is a good and useful convention for explaining things to future users of the code, if a little verbose. However, you are more likely to have seen class-based code, and there it is not used very well at all. For example:

class CompetitionBasket(FruitBasket):
    """Fruit basket that is entered into a scored competition.

    @param fruits: A dict of fruit names and quantities
    @param scores: A dict of fruit names and scores-per-fruit
    """

    def __init__(self, fruits, scores):
        self.scores = scores
        super(CompetitionBasket, self).__init__(fruits)

    def score(self, relevant_fruits=[]):
        """Return the score of the basket according to the current rules.

        @param relevant_fruits: An array of fruit names corresponding to
            the fruits which are currently under consideration. Defaults to
            an empty list and scores all fruit.

        @returns: Integer score of the basket.
        """
On first glance this looks like the docstring for score follows the same principles. But in actuality this is missing important information, which in a larger class in a complex system would be critical. Both self.fruits and self.scores are critically necessary to the functioning of this method, but neither of them are mentioned. There are advantages to this approach: it is fairly easy to programmatically verify presence of non-empty docstrings for all params and return values a function possesses, and significantly harder to verify presence of docstrings for all non-trivial instance attributes used in a method or all values mutated by side-effects. There are significantly more judgement calls involved in assessing which values need a docstring and which don’t, and it’s plausible that setting the bar for “docstring required” to include these would result in that requirement being more commonly flouted for other methods.

But to consider this and stop is an instance of Goodhart’s Law. It is an argument against mandating them, not an argument against including them wherever possible. For all the reasons we want docstrings (clarity of purpose, maintainability, etc.) we should, wherever possible, include these in the docstring. In some cases, this could result in a docstring 20 lines long, which is clearly a problem. However, in those cases I propose that the main problem is that there is one method which implicitly takes more than a dozen arguments; the object-oriented design has concealed the fact that it is an unwieldy, unmaintainable method, and forcing this docstring convention on it brings that fact back into the open.
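The “fairly easy to programmatically verify” claim can be sketched in a few lines; `check_param_docs` and `sample` are hypothetical names of mine, not any real linter’s API:

```python
import inspect
import re

def check_param_docs(func):
    """Return the parameters of func that lack an @param line
    in its docstring."""
    doc = inspect.getdoc(func) or ""
    documented = set(re.findall(r"@param\s+(\w+):", doc))
    params = [p for p in inspect.signature(func).parameters if p != "self"]
    return [p for p in params if p not in documented]

def sample(real=0.0, imag=0.0):
    """Form a complex number.

    @param real: The real part (default 0.0)
    """

print(check_param_docs(sample))  # ['imag']
```

Extending this to catch undocumented instance attributes would require parsing method bodies for `self.*` reads, which is exactly the “significantly harder” part.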

I would suggest this naming convention:

class Fnord(object):
    def methodName(self, foos):
        """Frobozz the foos according to the Fnord's bazzes.

        @param foos: a list containing Foo instances to frobozz
        @instance_param bazzes: Baz instances containing rules for frobozzing
            for this Fnord
        @class_param quux: Number of times Fnords frobozz each foo
        """