Metrics are as old as numbers themselves. The first software programs ever written, or more accurately wired, did little more than reveal metrics. Metrics hide in data, and in the early days, the primary reason for building software was to process and learn from vast amounts of complicated data. Then, somewhere along the way, software evolved from simply telling us things to helping us do things… and we seem to have forgotten, at least with regard to design, how to listen to what our software is telling us.
It’s intriguing that our industry fully embraces using data to help us construct better software and to be more effective at selling that software — but it has yet to evolve a structured process for using data to design better products. To be clear, by design, I am referring to the entirety of the product, not just its look-and-feel, user interface, branding, etc. Those elements are critically important, but if the underlying functional design is flawed, we end up with a very pretty and woefully ineffective product. Some products just feel right, and scarily, for most of these products, that’s purely by accident.
I should mention that engineers have never forgotten. They naturally began to embed technical metrics into their products — counters to measure code performance, memory usage, errors, etc. If engineers wanted to know something about the execution of their programs, they would code and log a metric. More metrics would yield more insights, which would, in turn, yield better performance and stability.
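The kind of instrumentation engineers reach for can be as simple as a counter and a timer. Here is a minimal sketch of that habit — the `Metrics` class and its method names are hypothetical, standing in for what libraries like StatsD or Prometheus clients provide:

```python
import time
from collections import Counter

class Metrics:
    """A hypothetical, minimal in-process metrics recorder."""

    def __init__(self):
        self.counters = Counter()   # event name -> count
        self.timings = {}           # block name -> total seconds

    def incr(self, name, amount=1):
        """Count an event, e.g. an error or a processed record."""
        self.counters[name] += amount

    def timed(self, name):
        """Context manager that accumulates wall-clock time for a code block."""
        metrics = self

        class _Timer:
            def __enter__(self):
                self.start = time.perf_counter()

            def __exit__(self, *exc):
                elapsed = time.perf_counter() - self.start
                metrics.timings[name] = metrics.timings.get(name, 0.0) + elapsed

        return _Timer()

metrics = Metrics()
with metrics.timed("parse"):
    records = [int(x) for x in "1 2 3".split()]
metrics.incr("records_parsed", len(records))
print(metrics.counters["records_parsed"])  # 3
```

The point isn’t the implementation — it’s the reflex: wrap the code you care about, count what it does, and log the results somewhere you can watch them.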
And like the engineers, the marketers never forgot. There isn’t an ad campaign running anywhere that isn’t measuring something — at least not an effective one! Print ads are placed based on readership metrics provided by publishers, broadcast ads are bought and placed based on metrics provided by Nielsen and Arbitron, and digital performance is measured and ranked by Google, Comscore, and countless niche players. There is simply no shortage of data available to help marketers improve the effectiveness of their campaigns.
The process of creating great software must be deliberate. The software’s design, from functional to user experience to visual, should be thoroughly vetted and intentionally selected. We are all familiar with the success of products brought to market by iconic industrial designers like Jonathan Ive and Yves Béhar, but who are their software design contemporaries? A Google search for “famous software designers” yields little insight. There are plenty of famous programmers, thought leaders, technology visionaries, and the like, but I failed to find a single person classified as a bona fide software design pioneer, and unfortunately, that’s not really a surprise.
Over my career, I’ve had the pleasure of helping design and develop dozens upon dozens of great software products. On the other hand, countless additional products were never built because, simply put, the champions of those products, my customers, failed to recognize and embrace the value of the effort required to properly and iteratively design a great product.
Designing great software is a process, and like constructing and selling software, it is a process that should be grounded in the reality of hard metrics. Until a feature is implemented and measured, we are generally just guessing at how well the feature will perform, whether it’s indeed useful, and whether or not anyone will actually care. Building software is incredibly time-consuming and expensive, yet as an industry, we insist on designing, planning, and budgeting for software products/projects in their entirety before testing their viability in the market. To me at least, that is totally backwards thinking. Personally, I don’t want to design the entire product first. I want to spend as little time/capital as possible, introduce it to the marketplace, measure its success (or not), and effectively engage the market to tell me what will make the product viable.
In the past decade, we have made great progress evolving our software construction processes (think Scrum) to be more reliable, measurable, and predictable. But we are just on the cusp of learning how to achieve those same benefits in the product design process. Champions of this cause, like Eric Ries and the Lean Startup movement, are demonstrating that iterative, measured product evolution can be extremely effective at helping startups achieve product traction. This is a great start, but the tools and mechanics of the practice are still very much in their infancy and generally proprietary to the practitioners themselves.
We still have a long way to go before data-driven product design is fully integrated in the development process. We have to develop tools and techniques to effectively instrument and measure the design process. We need fluid deployment platforms to allow us to effectively engage users and measure the effectiveness of new features. And, most importantly, we need business and management models that align more closely with the actual design/development processes we are evolving toward.
I’m thankful that forward-thinking companies like the one I work for are pushing the envelope on this issue on a daily basis. We are encouraging our customers to design and build less software faster. We are helping them to explore ways to move new ideas to market more quickly so that collectively we can instrument and use metrics to guide the product forward. We are exploring new cost structures that give our customers more control over the product value chain. And on and on…
So, the next time you have a great product idea, pull yourself away from the “design” and challenge yourself to construct a measurable market test instead. Teach your team to think about metrics first. Engage them to stop planning for everything and to just simply build the smallest meaningful thing they can measure. Teach them to listen, learn, and react — you may be amazed at what you find!