The Next Big Leap
In my view, FitNesse, Cucumber, and other acceptance test frameworks represented a big leap in software development effectiveness because they let coders get intimately involved with (and in some cases replace) Product Management/QA roles. Extreme Programming has developers working directly with product owners and capturing their requirements as "acceptance criteria" on story cards. Eventually, the desire to automate acceptance testing, along with emerging best practices for expressing acceptance criteria, led to Given-When-Then support baked into actual testing frameworks such as Cucumber.
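For anyone who hasn't used it, here's roughly what that Given-When-Then support looks like in a Cucumber feature file. The feature and steps below are made up purely for illustration:

```gherkin
# features/email_notifications.feature -- illustrative example only
Feature: Email notifications
  Scenario: Customer receives a notification after an order ships
    Given a customer with a confirmed order
    When the order is marked as shipped
    Then the customer should receive a shipment notification email
```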
That's okay, but it's not nearly enough. Let's talk about how the next big leap in effectiveness is going to happen. Assume the following story is true:
- Given a specification for a feature that has not been demonstrably linked to business value,
- When a developer works to deliver an implementation of that feature,
- Then the worth of the implementation work is indeterminate until some future time when its business value is validated
Most feature requests that I've ever worked with in the early stages of a project are ideations of the product visionary/designers/entrepreneur, based on intuition rather than customer feedback. And that's perfectly normal and acceptable to me. The more innovative a creation, the more work you usually have to do before you put it out in front of customers and get meaningful feedback. But the impact of those features on key business metrics is assumed, if it gets any explicit thought at all.
Aside: As for the implementation of those feature requests, I'm going to assume that being faithful to the specification is implicit, as it should be when we're talking about professional programmers. (Not working with professional programmers? GIGO)
Given those assumptions, I think what I and others have come to see as a problem with so-called "high-ceremony" adherence to methodologies such as BDD (Behavior-Driven Development) is this: by obsessing over the correctness of an implementation as faithful to its spec, it becomes easy to lose sight of the true value of our work, which is to deliver business value, as measured by key business metrics. We may obsess over crafting pristine, bulletproof, maintainable code, and yet its worth is indeterminate at best. And why, as developers, would we be satisfied with that?
"Measure as much as you can - no feedback == no direction" Hugo Rodger-Brown
How can we establish that our work is not a waste of time? One technique that I hope to work on in the near future is specifying the validation of business value right in my acceptance criteria, as part of their test cases. Said differently, I am looking for an easy way to specify non-functional requirements as part of acceptance criteria in such a way that they are automatically tested. It would be an extension or evolution of the current BDD toolset, not a replacement. You can even use the de rigueur GWT syntax, as in the examples and sketch below:
Given the implementation of email notifications,
When 100 customer events have been processed,
Then the mean time between events metric should improve by 30%
or better
Then the net revenue per event should improve by 30%
or best
Then the net revenue per event should increase by $100
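To make that last style of step executable, the Then clause has to pull its numbers from an analytics source rather than from the application under test. Here is a minimal sketch of what such a step definition might look like in Cucumber's Ruby layer, assuming a hypothetical MetricsClient wrapper around whatever store actually holds the business numbers, and assuming rspec-expectations is available (neither is a real, existing API):

```ruby
# features/step_definitions/business_metric_steps.rb
# Illustrative sketch only. MetricsClient is a hypothetical wrapper
# around the analytics store; it is not part of Cucumber or any real gem.

Then(/^the net revenue per event should increase by \$(\d+)$/) do |expected_increase|
  baseline = MetricsClient.net_revenue_per_event(period: :before_release)
  current  = MetricsClient.net_revenue_per_event(period: :since_release)

  # The assertion targets a business metric, not application behavior.
  expect(current - baseline).to be >= expected_increase.to_f
end
```

A suite like this probably wouldn't run in the normal CI cycle. The numbers it checks simply don't exist until real customers have used the feature, so it would more likely run on a schedule against production analytics data after a release.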
I care more about business value than about faithful adherence to a particular functional requirement, and my specifications should reflect that mindset. What makes this whole approach non-trivial is the challenge of establishing credible causal relationships in my acceptance criteria, given the multivariate nature of dozens of interleaved features in a particular application and the way that key business metrics overlay many features at once.
Despite the challenges, I think the concept I'm describing is the next big leap. The first challenge is to get people talking about it and defining goals. The second is to design a methodology and a technical framework for accomplishing those goals.
What do we call the whole thing? Business-Driven Development? (no) What do we call the specs? Can you write specs against metrics in a tool such as KissMetrics? It's hard (or at least has been for me) to figure out what metrics I need until after I've written the features that yield measurement data used to calculate those metrics. How do I fix or at least improve that workflow? How do I describe the discipline needed to think of the metrics prior to specifying a feature?
Right now all I'm trying to do via this blog post and my public comments is to start a dialogue. I'm not trying to make any blanket statements about the correctness of BDD/TDD or testing in general, just pointing out that there's lots of room outside of that particular box for innovation.
As for my comments at RailsConf and Goruco, it seems to me that at a startup that has not achieved market fit, it's wrong to spend much time ensuring that a given piece of software matches its functional specification. (Specs that often won't exist anyway, because the same person specifying the functionality is the person writing the code; in that case I'm saying don't bother writing the specs. Just "cowboy" it and do a complete rewrite later if it proves necessary.) Once you have your MVP, write specs for how particular code changes will impact key business metrics. Regressions will then show up as failures to improve those metrics.
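Concretely, a post-MVP spec in that spirit reads less like a functional test and more like a bet about a metric. The feature, time window, and percentage below are purely illustrative:

```gherkin
# features/checkout_redesign_metrics.feature -- illustrative example only
Feature: One-page checkout redesign
  Scenario: Redesign pays for itself in conversion
    Given the one-page checkout has been live for 14 days
    When checkout conversion is compared against the 14 days before release
    Then the checkout conversion rate should improve by at least 5%
```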
The question is not "should I test (or spec)?" but rather "what should I test?" As an industry, to the extent that TDD/BDD has caught on at all, I don't think we're testing the right things, given our emphasis on correctness of implementation. When we figure out how to rigorously test for business value, that's going to be the next big leap. It's a big green field waiting for all of us to play in, and I want to be one of the first.