I geek out on statistics. I can tell you exactly how many times I’ve used a particular airport and my distance traveled down to the mile, or my field goal percentage for my high school basketball career.

Metrics are fun, to a point. But in my years working in marketing technology, I wonder if companies too often blast past that point, chasing short-term carrots at the expense of long-term viability. Being “data driven” has become a badge of honor in today’s economy, where companies like Google, Netflix and Booking.com have used continuous optimization as fuel for massive growth.

There is a natural arc of optimization along which a product or service improves (features, marketing, communication, reputation or otherwise), creating a better user experience. And we are generally well aware of the point of diminishing returns, beyond which additional optimization efforts are fruitless.

But far too often, we fail to weigh the risk that optimization efforts will drive adverse outcomes that ultimately erode the very user happiness the product or service exists to create: not just diminishing returns, but a negative boomerang effect.

There are two reasons for this: a singular focus on transactional optimization and a failure to understand psychological value.

Optimization as Value Extraction

It is important to understand the context in which companies frame optimization, which is essentially through transactions. Data scientists see human behavior as data points, conversions or non-conversions, which of course we are not. It is possible to optimize for experience, but experience is typically not measurable in the short or medium term, making it an afterthought when incentive structures are driven by quarterly or annual results.

Take Booking.com, for example. In my 2.5 years there, my one consistent frustration was a frequent failure to apply common sense to data-led decisions. The company prides itself on experimentation, but with autonomous teams generally given carte blanche and measuring transaction vs. no transaction, well-intentioned optimizations quickly devolve into a quagmire.

In the example below, I had 109 options for booking a room in the same Tokyo hotel:

[Screenshot: search results showing 109 booking options for the same Tokyo hotel]

The same thing happens in many contexts when the only metric that matters is short-term conversions: overly aggressive marketing that saps brand integrity, oversaturation of a market that eliminates scarcity value and predictive algorithms that verge on stalker-ish, to name a few.

Overlooked Psychological Value

By conventional tech industry logic, a secretary, a doorman or even a pilot (to some extent) can be technologically optimized out of existence. There are already high-functioning scheduling tools and automatic doors, and most of a typical flight is flown on autopilot.

But these solutions in and of themselves fail to cover some of the other essential responsibilities of these professions. A secretary is a signal of status. A doorman, prestige. And a pilot reassures the humans on board of their safety.

This raises an interesting thought experiment for cohort-based courses to consider as they scale. I believe in the massive potential of EdTech to lap traditional education in learning efficiency and even intentional community building. But looking up from my laptop at the basketball chaos of the NCAA Tournament (I am writing this during March Madness), I wonder how a course like Write of Passage could offer the psychological equivalent of alumni wildly cheering, living and dying by three-pointers and slam dunks they have nothing to do with.

Very few things in life serve a singular purpose, and for many folks, lifetime membership in the “in” crowd cheering for a school may subconsciously hold as much value as the education itself. Treating everything as an optimization problem often fails to consider these kinds of second-order values of a product or service.

Finding the Right Balance

As with most things in life, balance is everything. Habitual optimization requires self-restraint, much like drinking alcohol or eating sugar. Unfortunately for the data crowd, there’s no algorithm or big dataset that’s going to tell you when you’ve gone to the well one too many times.

It takes some intuition, it takes some EQ, and perhaps companies should begin designating team members tasked with considering the human things. Data is great at assessing intended outcomes, but considerably worse at understanding psychological second-order effects.

Every executive has an insatiable thirst for sprint wins; the marathon winners will be those who can balance experimentation with deliberate consideration of the what-ifs that short-term data won’t catch. That approach assigns the appropriate level of risk to one more squeeze of the lemon, blending the art with the science needed to predict the point when user delight might start to boomerang back.