Inside the designer's studio

November 27th, 2007 by ymendel

As is apparently a tradition for him, Rick wrote a tumblebot at RubyConf ‘07. This was done all BDD-style and it was very nice and modular, and apparently many people got very interested in it. From what I remember, Freenode #rubyconf was clamoring for the source[*], but that could mostly be because it was full of lazy people who didn’t want to write their own bots.

I got interested because I like playing with bots and this was something to get me looking at Autumn Leaves and git. It was also a way to poke at Rick’s stuff and make it better, which I’m always ready for. My idea took his system of a main bot that handles a parser module and a sender module and added a third: a filter. This not only gave more power, flexibility, and functionality to the bot, but (just as important) it made the parser cleaner and simpler.

We’d mentioned that it might be important to let the filters run in a very specific order, but we held off on that until it was needed. It wasn’t long before we thought of filters where order would be nice, but not necessary. It took a little longer to think of filters where order was needed.

cardioid Heh, check the quote. Even the “posted by rickbradley” is linked
cardioid I wonder if there’s a way around that
rickbradley I saw that
rickbradley yeah, probably having the posted by rickbradley filter come after the other stuff :-)
rickbradley this is like that plugin ordering problem (the one Rails had)
rickbradley where sometimes you want a to come before b, but other times z just needs to come before everything
cardioid Dude, you said we’d just have the filters say if they need to be before or after something and then sort the graph
rickbradley yeah, maybe I was wrong
cardioid heh
rickbradley in fact, odds are that usually I am :-)
cardioid I don’t know if it’s a bad idea.
cardioid Maybe we just need to do the “active filters” stuff in the config and let that handle the ordering.
rickbradley yeah, that’s a pretty simple way to do it
cardioid I do want to make that filter that clears the title if it’s just someone’s nick, and there’s a specific ordering needed there
rickbradley you know, if we were 37s we’d claim that was aikido or some shit and then post this chat log in a pretty blog post
cardioid cleanup title (get rid of shit like : and — at the end), remove nick title, get link title
rickbradley with no mention of “fisting” or “sheboygan side by side” or anything
cardioid blogicx, dogg
cardioid In fact, let me handle this

Okay, maybe the “pretty” part isn’t there yet. Give me time.

[*] for the bot source, “git clone”

Object Daddy

November 26th, 2007 by rickbradley

Man, that’s a lot of scrollbar, wtf?

When I decided to write this blurb – well, actually when I decided, repeatedly, to write the various butchered incarnations of the code that ultimately led to this blurb – I knew 3 things for certain:

  1. Rails test fixtures are evil and a plague upon developers.

  2. That guy (a.k.a., me) who wrote that code, like, two weeks ago – he’s a frickin’ MORON.

  3. There has got to be a better way.

You may only agree with #2 above. I’m here to remind myself about the merits of #1 and #3. I’m especially hoping to remind myself about #3, but I seem to have plenty of evidence of #2 and at least some about #1, so forgive me if I talk a lot about fixtures.

(for those of you who know I’m a windbag, feel free to skip straight to the place where you can get Object Daddy – or go straight to github.)

The Problem(s) with Fixtures

The Ubiquitous Data Language

Eric Evans on Ubiquitous Language:

A project needs a common language that is more robust than the lowest common denominator. With a conscious effort by the team, the domain model can provide the backbone for that common language, while connecting team communication to the software implementation. That language can be ubiquitous in the team’s work.

Common Wisdom: A lot of us buy in to Eric’s Ubiquitous Language when we model the domain. When we move into testing we cargo-cult the Ubiquitous language along, presuming that our testing tools will work well: we tell stories about our test scenarios, our imaginary users, the Nouns in the system, the mythical system use cases; and then we put those stories into our test data and our tests talk about these storied users and those shared Nouns in our Ubiquitous (testing story) Language. This buys us uber Agile superpowers of badassness and pwnability.

The Harsh Reality: Test data is opaque. Fixtures are invisible and when they’re not, they’re read-only. Test data is coupled like Ike and Tina, and twice as likely to result in a domestic violence call. Stories change, quickly, and are mostly forgotten. Test data never changes. Fixture names don’t even change. “users(:quentin)” in a test doesn’t mean shit to anyone, not even the guy who put it there 2 weeks ago. Any data relevant to a test had better be visible right in the test itself or it might as well be encrypted on the drive.

Not only that. It’s worse.

So there was this billing system I worked on with a group of dudes a while back. Not like a “You have 3 items in your cart, would you like to check out now?”-billing system, nor a 7 model “We’ll keep track of the invoiced items you can send to your customers”-pretty-Arial-fonts-Ajaxy-ends-with-“r”-billing system. No, a 60 model honkin’ huge-ass medical billing system with 837i’s and claims adjustments, insurance panels, contractually negotiated rates for services, service authorizations by moon phase, etc. All that enterprisey nonsense.

I’m not proud. I came to hate that billing system, and I know the poor bastards who still live with it hate it today. There’s nothing more pitiable yet less deserving of pity than an organization riddled with more bureaucracy than Sonny Corleone had bullets after his morning at the toll booth. An industrial-grade billing system, as the “cost of doing business”, is proof that that pervasive bureaucracy shines both within and without.

Hrm. But I digress.

Back then none of us really knew a damned thing about testing, using our fancy new Rails tools to help us with data management, reducing coupling, or testing in isolation. Heck, I don’t think rspec was even a sly wink between dastels and srbaker yet, and we were hardly stylin’ and/or profilin’ with mocha and/or stubba at this point in time.

No, we got that tests were a good thing, and we duly read our books, watched our innocent peepcasts, and wrote useless tests to make sure that ActiveRecord knew its ass from its elbow.

At some point we tried to test billing and it was a Michael Feathers special – no seams anywhere, a really impenetrable mesh of over 30 models that no one had the skill to cut apart. To make matters worse, there were those who knew we should be sawing at the coupling but couldn’t truly get a handle on how, and those who would rather point-and-click their way to Lode Runner-style glory and just dumptruck a metric asston of data into the database and let the tests assert that all is good with the world.

So the latter bunch wins, because it’s easier to do the wrong thing than the right thing (always – this is basically an Axiom, it should have a name, like “Lambert’s Whorage Principle” or “McPwnd’s First Law”, but I can’t put my finger on a name), and thereby you get a gem like this one:

require File.dirname(__FILE__) + '/../test_helper'

class ScheduleEntryFfsBillingTest < Test::Unit::TestCase
  scenario :ffs_billing
  fixtures :parties, :locations, :cost_centers, :service_places, :activities, :logins, 
           :tuple_domains, :tuples, :client_payer_relations, :credentials, :gl_mappings, 
           :panels, :panel_members, :panel_payers, :fee_matrices, :fl_matrices, 
           :payer_fee_matrices, :pfl_matrices, :pdcrf_matrices, :care_domains,
           :care_domains_objectives, :client_domains, :allowed_cost_centers, 
           :authorizations, :positions, :accountabilities, :commissioners, :responsibles, 
           :accountability_positions, :chart_entries, :observations, :gl_mappings, 
           :gl_mapping_types, :tags, :taggings, :form_sets, :form_items, :choices

  def test_validate_ffs_billing
    # the below frustrates me that it doesn't work w/ assert_difference
    count = ScheduleEntry.ffs_ready_to_bill.size
    assert_difference BillableItem, :count, 98 do
      entries = ScheduleEntry.validate_ffs_billing
      assert entries
    end
    assert_equal count - 130, ScheduleEntry.ffs_ready_to_bill.size
  end
end

Sorry, I probably should’ve warned you that was coming. (There will be more.)

Btw, I’m already rehearsing my “Fair Use Doctrine” defense. I’m not naming names, and I’m not giving away the family jewels (far from it) here. In fact, let me go on record and lie (but only slightly) and say that all the completely abysmal trashcode I’m going to post was written by me (see point #2 in the introduction). While that’s technically not true, I was around when it was being written and should have, at the least, intervened.

But I did not, and so we can now all learn.

“So, what’s the big deal?”, I hear That Guy in the 2nd row saying. You, sir, should be paying special attention. Ahem.

First, the ‘fixtures’ declaration is a monstrosity. There are 38 (if I counted correctly) different fixture files being loaded. Thirty-eight. Like 3 tens and an 8. One score, a fortnight, and a Nostradamus quatrain predicting an infestation of pdcrf’s, whatever the hell they might be.

Bonus round: the fixtures are bundled up into a fixture scenario. So, we’ve got a bajillion fixtures, times however many scenarios. There is no way there’s a “story” about the data that’s living in the team, feeding with the group, snuggling up in warm coder laps. Unless it’s a Stephen King story.

We talk with the users and get a bunch of stories about the billing system. We develop our domain model, using our ubiquitous language. We talk about the data that’s necessary, how it will be structured, and how the various stories interact with it. We make example data to help us test each of the various stories. As we get new stories we make new data and we put it into fixtures files. That’s the right way to do it, right? That’s what fixtures are there for, so we’re doing The Right Thing. Fixture data accretes, more fixture files are created. Linkage between fixtures and fixture files begins to happen and grow. Finally we reach the point where when we want to run a test of some of the bigger (mis-designed, partly due to inspiration from other mis-designed systems) parts of billing (“Let’s do a ‘billing batch’!”) we need to gather together the fixture data for the entire billing system to make it happen.

Another thing that stands out about the test above is that the data is completely opaque. We know there’s probably 98 or 130 of something. We know there’s a bunch of models and fixtures involved. We have no idea how much data is in any fixture, how much coupling between fixtures there is, nor what the semantics of any of that data are. Is it good records or bad? Is it representative of real data we might find in production? Is it a dump of data from a legacy system? Or is it just fictional data, or placeholder data, or data not at all suited for purposes of the test? Impossible to say. I was there and I still don’t have the foggiest notion what the nature of that beast was.

So the test then apparently computes how many things are ready to bill, then it runs something called “FFS billing” (which I happen to know stands for “fee for service”), and then it makes sure that there are 98 more billable items and, evidently, 130 fewer things ready to bill afterwards.

Frankly, that’s useless.

And, yes, I know the sentiments, “well, at least it’s a test”, or “it counts as a smoke test, and it will at least detect when there is a problem”, and for too long I agreed. I mean, surely a test is better than no test. But it’s not true.

Our continuous integration system, during the months and months that this test existed (… oh, not always in this particular form, but this hydra’s heads were all horrible and they, to toss metaphors like a salad, were definitely limbs from the same stump: if the different incarnations of this test kissed you’d be right in shouting “incest!”, and, yes, you’d still feel like you’d seen something out of Deliverance) would fire off every other day or so complaining that this test was failing. Always with an informative error message like:

expected: 142

got: 143

There were various times, too, when the test would fail after a commit to a wholly unrelated part of the system. Even better, sometimes it would fail when only the project README file was updated. There was something in the data or the underlying computation that was time-dependent or maybe maliciously random.

And so periodically some brave soul would suit up, grab a lantern, a cat-of-nine-tails, and a week’s supply of shrimp-flavored Ramen and head into the project looking to slay the foul FFS billing beast “once and for all”. Days were sacrificed to the dragon. Real days and “programmer-days”.

It wasn’t until we had finally learned enough about life and about testing to say, with confidence (and with feeling), “This test is not only wasting our time, it’s worthless, and most of the code under it is worthless”, and to excise it from the tree altogether, that we got out from under its yoke.

OK, fun and games, but there’s a point. The reason this soul-sucking test was so wretched was not (as it would appear on the surface) simply that we didn’t know how to write a decent unit test. The reason this test was so costly was that after taking Eric Evans’ (in my opinion, unimpeachable) advice about Ubiquitous Language, we also fell for the Common Wisdom and put our chips on fixtures. We thought fixtures were a great idea, they are The Rails Way, and clearly putting our data into them is The Right Thing.

In fact, the opposite is true, and the Harsh Reality came home for us. Not only did we not know much about testing (and news at 11, guys, most developers don’t know jack about testing), but our trust in fixtures and our focus on a data-centric, fixtures-centric view of the world is the primary reason we ended up as street walkers on the corner of FFS and Billing.

Unreadable readability

Let’s take a look at some slightly less abysmal, but still utterly wretched unit tests. Here I’ve culled a pair of unit tests out of a suite. There are far fewer (though still what I would consider 8 too many these days) fixtures in evidence, and the tests are a bit more varied, and, in fact, positively swollen with the promise of possibly not being completely jacked up:

class ScheduleBookedTest < Test::Unit::TestCase
  fixtures :schedule_entries, :parties, :locations, :cost_centers, :service_places, 
           :activities, :tuple_domains, :tuples
# [...]

  def test_booked_without_allocated
    assert_raise(ActiveRecord::RecordInvalid) do
      entry = ScheduleBooked.create!(
        :staff_id => parties(:jrbradle).id, :status_id => tuples(:first).id,
        :location_id => locations(:location_566420).id,
        :cost_center_id => cost_centers(:cost_center_30).id,
        :begin_time => 2.years.from_now, :end_time => 2.years.from_now,
        :service_place_id => service_places(:first).id,
        :activity_id => activities(:activity_12).id,
        :client_id => parties(:client1).id)
    end
  end

# [...]

  def test_find_by_staff
    # see how many entries rick has this week
    today, monday = example_days
    entries = ScheduleBooked.find_by_staff(parties(:allen).id, monday, 5.days.from_now(monday))
    assert entries
    assert_equal 17, entries.size
    entries.each { |e| assert e.kind_of?(ScheduleBooked) }

    # be uber specific when searching by a staff member
    entries = ScheduleBooked.find_by_staff(parties(:allen).id, monday, 2.days.from_now(monday),
      :conditions => ['activity_id = ? AND location_id = ?', activities(:activity_89).id, locations(:location_566402).id])
    assert entries
    assert_equal 6, entries.size
    entries.each { |e| assert e.kind_of?(ScheduleBooked) }
  end
end

Well, so much for potential. However, these are interesting for different reasons than the earlier test.

Here the accessor methods for fixtures (e.g., “parties(:jrbradle)”) are in heavy use. In the first test there is almost no line without a fixture accessor, while the second test makes frequent use of them as well.

In the first test two things are striking. First, the fixture accessors are used to pull objects out of fixtures, get their id values, and hand them in as data fields to the ScheduleBooked constructor. Second, not only are the fixture labels mostly meaningless (at least to me, as a reader, even one familiar with parts of this system and with the domain model it incorporates), but many of them seem downright reader-hostile, as if the person maintaining the fixtures imparted no meaning at all to any of the fixture records.

It prompts the question, “If all you need is a warm object, isn’t there a better way to get one than fishing around in the fixtures for an arbitrary one?” But this quickly gives way to doubt, as in, “What if these particular objects *do* have some important properties, and the handles just don’t help describe them?”

Could “locations(:location_566420)” have some important characteristics critical to this test? It seems unlikely, but then again it seems sadistically probable the more one stares at the code. What about service_places(:first)? Maybe activities(:activity_12)? So many questions leap to mind (actually, the question I keep asking myself when reading this is, “Who wants a trip to Fist City?”).

The second test is similar, but a bit more subtle and sinister. We see the same arbitrary fixture accessor calls as in the previous test (as well as the use of the “parties(:allen)” accessor, even though the comment says to see how many entries I have this week – that’s just for the lulz, one hopes). But, look closely – “ScheduleBooked.find_by_staff”, when it is working properly, is returning either 6 or 17 objects, depending upon how it’s called. Are those objects coming from the test code? Nope, they’re coming from the fixture data.

If the fixture data is as expected, and the code is “working properly” (whatever that might mean, as the test gives us no insight) then the sausage grinder will spit out 6 or 17 little nuggets for us to play with.

The bottom line is this: no matter how well you rewrite these tests, if you insist upon accepting the Common Wisdom cited earlier and use the fixture files at hand you will never be able to make those tests useful, descriptive, or uncoupled from the opaque, invisible, tangled, read-only fixture mess that’s lying on the disk waiting to beat you about the neck with a cricket bat.

Fixtures appear to be an easy tool to bootstrap productivity, to help you grow into your application as it gets larger, to give some readability to your tests, to aid in maintainability. But the inherent nature of fixtures is to be unmaintainable themselves, to be essentially unreadable, to be a source of coupling, and to make your tests fragile and unmaintainable.

They are not even a boon when “just starting out”. There’s the obvious realization that fixtures as an early crutch (like scaffolding or simply like “throwaway code” that never reaches the wastebasket) never really go away and they become part of the architecture of the later larger system. But, more importantly, fixtures even at the earliest stage bring the Harsh Reality to all but the most disciplined of developers, who can fight their attractions:

def test_user_owes_money
  assert_equal 2, users(:quentin).owes.size
  assert_equal 101.00, users(:quentin).owes_amount
  assert users(:arthur).owes.empty?
  assert_equal 0, users(:arthur).owes_amount
end

And, if truly disciplined enough to control fixtures, isn’t there something else you should be using?


I’ve written about this before with a couple other folks. Fixtures are slow as a thick chicken gravy through a pair of panty hose, and it can be difficult to get out from under fixtures.

Want the post-mortem, since there wasn’t one written? (Note that I left that project at some point, but I still get lots of stories about how things have gone.) Tests got faster when we got out from under fixtures, then they gradually got slow again. The tendency to stay with a Common Wisdom-style love of big opaque data sets is basically impossible to break once it sets in. Work was done to get test runs to not touch the database at all, and that made significant progress in speeding things up, at the expense of digging out from under old database-married tests.

The data management problem in general, though, became worse over time. I think they may have it mostly under control, but it consumed a lot of time (including some of my time before I left). Ultimately they insisted on having not only development data (a small set for developers to click around with in the browsers on their laptops) and production-related data, but also a set of “test data” for running tests. I.e., fixtures by another name, with home-grown tools to manage them. The Common Wisdom, deeply embedded.


While I am pretty thoroughly convinced that fixtures emerged from the 8th, if not the 9th, Circle of Hell, I seem to have a lot of comrades-in-arms who are intent on fixing “The Fixtures Problem”, as well as the company of a lot of high profile developers (folks who, unlike me, probably don’t run ‘svn blame’ on their own code to figure out who that idiot was…) who apparently just don’t do fixtures any more.

Jeremy Kemper (aka “bitsweat”, aka Rails Core guy), from what I recall, just makes a database snapshot of some reasonable data and shoves it into the test database when it’s time to run tests. No fixtures, no overhead, just go.

Jay Fields (aka “thoughtworks guy”, aka dude who is doing a lot of good work on test craftsmanship and overall coding practices) doesn’t even use the database if he can get away with it, much less wade around in the fetid fixture sump.

People keep trying to “fix” fixtures or provide alternative means of getting test objects instantiated. Most recently, Rick Olson (aka “technoweenie”, aka another Rails Core guy) put together another attempt at getting around the shortcomings of fixtures with model_stubbing.

Towards a solution

There have been a lot of people looking for answers to the problem of how to generate usable objects for use in testing, inside and outside Rails and its particular idioms. Of particular interest are a number of approaches I see as on the right path, but which I don’t believe have yet “solved” the problem. In particular:

Constructor Helpers

These are methods defined in test classes or test helpers to assist in constructing custom objects for use in tests. These are typically hand-written one-off methods, and often center around users, sessions, or authorization. An example:

def create_user(options = {})
  User.create({ :username => 'testie', :email => '', :password => 's3kr17' }.merge(options))
end

These methods are quite helpful and ignore fixtures altogether, but they do not protect the caller from concerns such as the uniqueness of attributes like user names or email addresses. They also take no advantage of metaprogramming, inspection, or automation to provide a more general facility.

Object Mother Pattern

The Object Mother is a class which encapsulates the creation of objects from the ubiquitous stories of the data for the system. The Rails fixtures + fixture accessors tools are actually an implementation of Object Mother. While there may be some advantage to re-implementing Object Mother for Rails, the problems with opacity of test data as well as coupling, etc., make me pessimistic.


Stereotypes

I’m not sure what else to call these, so I’ll use “stereotypes”, which was the term I used when I was experimenting with them. Basically, a general method is created which purports to be able to create an instance of an arbitrary class during testing. To accomplish its job it relies on a “stereotype” for that class – a hand-written method which knows how to build an instance of a single class. Here is part of a ruby implementation of such a beast:

def stereotype(args = {})
  raise 'Must specify a class to stereotype' unless klass = args.delete(:class)
  method = "create_stereotype_#{klass}".to_sym
  require "#{MOCK_PATH}/#{klass}_stereotype" unless klass.respond_to?(method)
  self.send(method, args)
end

def create_stereotype_chart_entry(args)
  ce ={
    :employee => custom_party(:class => :employee),
    :client   => custom_party(:class => :client),
    :form_set => stereotype(:class => :form_set),
  }.merge(args))

  ce.form_set.form_items.each do |item|
    ce.observations << stereotype(:class       => :observation,
                                  :chart_entry => ce,
                                  :form_item   => ce.form_set.root)
  end
  ce
end

These suffer from scalability problems (i.e., each stereotype is written by hand), as well as problems with validation – it can be difficult to find an easy way to guarantee that unique attributes are unique, that formatted attributes (like email addresses, social security numbers, etc.) can be generated indefinitely in the right format. There seemed to be some promise but these were cumbersome.

Mock objects

There is a large body of literature on mock objects, mocking, stubbing, etc., so I will not belabor the description (Google is your friend). Mocking can be quite effective, particularly for isolation testing (what would normally be called “unit testing” but Rails has polluted that term’s definition by pretending the term applies to certain types of classes not at all in isolation). When we need to test classes together, especially in a framework such as Rails where the domain model is intertwined with the persistence layer, it becomes difficult to rely solely on mocks to play the roles of the various objects our tested classes will interact with.

Model Stubbing

This is a very recent attempt by Rick Olson (technoweenie) to provide a means of specifying model instances for tests without relying on fixtures. I particularly like the fact that all the information is available in the test file so that the opacity problem is solved. From Rick’s current README:

require 'model_stubbing'

class FooTest < Test::Unit::TestCase
  define_models do
    time 2007, 6, 1

    model User do
      stub :name => 'bob', :admin => false
      stub :admin, :admin => true # inherits from default fixture
    end

    model Post do
      # uses admin user fixture above
      stub :title => 'initial', :user => all_stubs(:admin_user),
        :published_at => current_time + 5.days
    end
  end

  def test_foo
    @user   = users(:default) # default user stub
    @admin  = users(:admin)
    @custom = users(:default, :age => 25) # custom attributes,
                                          # but not equal to @user any more

    @post   = posts(:default)
    @post.user # equal to @admin above

    current_time # stubbed to be 6/1/2007 using mocha or rspec
  end
end

On the downside, this suffers from some of the same problems as Constructor Helpers above: it isn’t readily scalable, there is little reuse of definitions for models, and it doesn’t seem to allow for easy refactoring when a model’s validations change. That is, if model instances are stubbed in many tests and the validations for that model then change, many tests will likely need to change with them.

Inline Fixture Validations

This is a technique by Jonathan (sorry, I don’t have his last name). It is similar in some respects to Model Stubbing, but inlines the declaration significantly by piggybacking on the existing “fixture” method name. It is more terse but has essentially the same pros and cons as Model Stubbing.

require File.dirname(__FILE__) + '/../test_helper'

class PersonTest < Test::Unit::TestCase
  # We don't need to load the people fixtures anymore
  # fixtures :people

  def test_should_require_unique_email_address
    # The fixture method will load in the data we need for this test
    fixture :person, :email_address => ""

    # [...]
  end
end


Exemplars

An exemplar is, to me, almost all the way there. It’s a significant step down the road towards having a decoupled, in-one-place, easy-to-use, terse means of creating valid AR models for use with Rails. From the article discussing them, they look like this:

class User
  class << self
    @@exemplar_count = 0

    def exemplar(overrides = {})
      @@exemplar_count += 1
      with_options(:username => "user#{@@exemplar_count}",
                   :email => "user#{@@exemplar_count}",
                   :password => "fredisabadpassword") do |maker|
      end
    end

    def create_exemplar(overrides = {})
      returning(exemplar(overrides)) { |user| }
    end

    def create_exemplar!(overrides = {})
      returning(exemplar(overrides)) { |user|! }
    end
  end
end

setup do
  @first_user = User.create_exemplar!
  @second_user = User.create_exemplar!
  @taggable = ...
end
Pretty nice. The downsides are that you end up having to declare them in the various model classes repeatedly (i.e., some metaprogramming is wanted), and they don’t provide a general solution for uniqueness/formatting validations (though some creativity is almost always going to be desired). Another downside is that they inject a testing concern into the model classes, which is arguably a bad thing. In short, though, this is pretty good stuff, and just a few improvements away from something that I personally think would be really useful.

Object Daddy

Interestingly, I’d put this problem down for probably 3 months before I woke up one night, hacked up something and shoved it into a pastie (this one, actually) and then started doing some research to see if I was reinventing the wheel. When I came across Exemplars I thought for a moment that Piers had already done everything I was trying to do, and then I realized that what we were doing was different enough that I should probably BDD a full version and see if it is viable.

So I went ahead and did that, and the end product is a library / Rails plugin I’m calling “Object Daddy”, for obvious reasons. It’s closely related to Exemplars, Object Mother, Inline Fixture Validations, and Stereotypes, as described above.

You can pull the most recent version of Object Daddy via git:

% git clone git://

Who’s Your Daddy?

Drop Object Daddy into the vendor/plugins directory of your Rails app. Now every model has a .generate() method which will create a new instance of that model. If the model in question has no constraints that would make fail, then you’re set. Models that do have stronger constraints will want “generators” for the attributes which need special care.

Generators are declared with the ‘generator_for’ method, which can declare a generator for an attribute in one of three ways: by specifying a block, by providing a method name, or by specifying a class. The job of the generator is to provide a value for that attribute when .generate is called for the model.

Here is an example of all three types of generators (from the README):

class User < ActiveRecord::Base
  validates_presence_of :email
  validates_uniqueness_of :email
  validates_format_of :email, :with => /^[-a-z_+0-9.]+@(?:[-a-z_+0-9.]\.)+[a-z]+$/i
  validates_presence_of :username
  validates_format_of :username, :with => /^[a-z0-9_]{4,12}$/i

  generator_for :email, :start => '' do |prev|
    user, domain = prev.split('@')
    user.succ + '@' + domain
  end

  generator_for :username, :method => :next_user

  generator_for :ssn, :class => SSNGenerator

  def self.next_user
    @last_username ||= 'testuser'
    @last_username.succ!
  end
end

class SSNGenerator
  def self.next
    @last ||= '000-00-0000'
    @last = ("%09d" % (@last.gsub('-', '').to_i + 1)).sub(/^(\d{3})(\d{2})(\d{4})$/, '\1-\2-\3')
  end
end

And generated models in use, in a spec:

it "should have a comment for every forum the user posts to" do
  @user =
  @post =
  @post.comments << Comment.generate(:title => 'fr1st p0s7!!11')
  @user.should have(1).comments
end

Object Daddy is aware when associations between models would require Foo.generate to have an associated Bar instance to be valid. It will automatically call Bar.generate at the appropriate time in order to make the new Foo valid.

Object Daddy allows you to override any attributes on the model when calling .generate. This allows you to make meaningful test objects rather than just getting the plain-vanilla generated objects at test time.

And, borrowing terminology from Piers, while Object Daddy can declare generators on the very model classes themselves, it also supports (and creates a directory to hold) “exemplar” files where .generate will look for declarations for models. So if you’re calling User.generate, any generators found in test/exemplars/user_exemplar.rb will be run for you, allowing you to keep your test concerns out of your model files.

Feedback is welcome. Hope this helps,

Rick Bradley (

Update: New ways to create generators, working with STI, and oh making some of it work in general