User disengagement

06 May 2022

Over time, I have come to realise that the term user “engagement” in software is, more often than not, a thinly veiled proxy for user entrapment, whether or not it is intentional on the part of software designers. The trend of shallow user numbers and “engagement” metrics taking centre stage in business valuations has created perverse incentives at a massive scale for software to indulge in unethical practices of “engaging” users at any cost, with no respect for users’ limited and fast-eroding cognitive resources. Speaking of cognitive resources and attention, a significant amount is lost daily to juggling apps that all attempt to steal as much attention as possible. How much attention can a human meaningfully divide between an ever-increasing pool of software, all racing to the bottom in their attempts to build a “captive audience”?

The idea of user “engagement” on its own may not be inherently unethical, but it often seeps into this realm, where software is specifically designed to regularly, forcibly, and artificially attract user attention to increase “engagement”, even when the software’s utility is completely out of the user’s mindspace. This is common in the form of “gamification”, shallow feature releases aimed at creating illusions of improvement, a deluge of promotional and product update notifications and e-mails, manufactured red-dot cues on the UI that feign urgency, “exciting offers” that are “personalised” … My mobile network provider’s app, which I occasionally use to track and pay phone bills, wants to offer me a “personalised experience” by tracking my actions outside of the app. Not to mention, it also regularly attempts to push random “exciting offers”. It is only slightly less annoying than the widespread, poorly timed “SUBSCRIBE TO OUR NEWSLETTER” popups that plague already ad-ridden websites.

This is not a new phenomenon though. The browser popup window menace of the late 1990s and 2000s was an unruly precursor to this. Occasionally, when I come across a browser without an adblocker installed, I shudder at the unholy site (pun intended). With the exponential growth in software “innovation” and money, much of that unruliness is now legitimised and institutionalised. Everyone from multi-billion dollar mega corporations to small startups now deploys sophisticated technology to steal and monetise user attention via their software. User “engagement” has become an integral part of the software design process.

There is great irony in software specifically created for stealing attention, profiling user behaviour, and mining private data to sell ads, touting user centricity and showcasing sophisticated technology for maintaining privacy. Somewhere in an alternate reality, cigarette makers are bundling nicotine patches with their packs to encourage disuse, while running marketing campaigns extolling their user-centricity.

What exactly is user “engagement”?

If one disregards the marketing drivel, any explicit attempt to maximise user “engagement” boils down to forcing users to look at and click around on software screens, given that the significant majority of end-user software is GUI-based. That is, screen time, a term that only signifies eyes-on-screen time without addressing utility or usability. If that were not the case, if users simply used a piece of software when there was a need and dropped off when there was not, and that natural usage were the metric, there would be no need to measure and maximise user “engagement”.

Thankfully, screen time is starting to carry a negative connotation owing to the increasing awareness of its negative effects, not only cognitive but also physiological. What a user does on the screen may not matter at all as long as they are doing something and staying “engaged”. As the adage goes, if they are “engaged”, surely, there must be a way to monetise them in the future /s. The holy grail: “engagement” metrics, a captive audience, and continuously growing business valuations built upon them.

Speaking of screen time, when a user has a specific problem to solve, they would use the appropriate software to solve it, and upon solving the problem, they would leave the screen. They would come back again when there is a need to use the software, whether that is several times a day, once a week, or once a year. Any attempt on the software’s or the software designer’s part to “engage” the user beyond this is most likely artificial in nature and irrelevant to its utility; a dark pattern. Thanks to smartphones, it is an act of theft that cuts straight through to the user’s private life and personal time with little respect for the user. It would be unacceptable for businesses to incessantly ring doorbells, peep through windows, and plaster ads on people’s doors, and yet, intruding in people’s lives via software in a multitude of ways is somehow legitimate. Of course, acknowledging this reality and designing software accordingly may not fly with business growth projections, funding prospects, or never-ending revenue maximisation attempts.

The question here is fundamental, even visceral. When there are businesses with no viable business models burning inordinate amounts of money acquiring and “engaging” users in a race to the bottom, stealing user attention in increasingly dark ways to drive “engagement”, is it even possible to build software meaningfully and compete organically in highly inorganic marketplaces, where the heaviest wallets can drive competition out, irrespective of the quality and utility of software?

Our experience at Zerodha over the last decade gives me the conviction to say that it is indeed possible. Not easy, but possible. That is how we have built software and services—slowly, steadily, and organically. On the flip side, how many fail-fast startups deploying huge amounts of external funding and dark patterns manage to build useful software and sustainable business models? Come to think of it, that may be even more difficult: competing for impossibly large and artificial outcomes.

How one defines success plays a critical role here. If one sets out writing software or starting a business with the explicit goal of chasing arbitrary numeric goals (becoming a “unicorn”, indefinite revenue growth, user numbers …) far beyond the realms of sustainability, there is a high likelihood of the resulting products naturally optimising for those goals, creating monstrosities of user-hostile dark patterns. Realistically, very few businesses succeed and very few products ever make it. The graveyard of good and bad software is vast. So, really, it comes down to the good old ethics one chooses when building products, not the UI frameworks.

If one sets out to write software combined with a viable business model that meaningfully solves problems for end users, with the conviction that its usefulness will organically acquire users and slowly create a sustainable business, the likelihood of decisions that create user-hostility reduces drastically. This clarity brings focus to the things that truly matter—the quality and utility of the software.

Disengagement

In hindsight, the business practices and end-user software that we have developed at Zerodha over the last decade employ a philosophy that I would like to call user disengagement. At its core are two tenets:

  • Only do what is truly useful and meaningful to end users.
  • Don’t do unto others what you don’t want done unto you.

In this philosophy, the term engagement has little to no relevance. Business and product decisions are based on utility for the end user with common-sense trade-offs for the business to be sustainable, and never on arbitrary “engagement”, “growth”, or valuation metrics. In some contexts, utility can be quantified and measured, while in others, it is qualitative. In the end, such decisions have to strike the right balance between domain expertise and intuition based on experience, objective decision making, and user needs and wants (which can often be mutually exclusive).

User disengagement is not the act of forcing users out of software, obviously. That would be user-hostility, and ironically, it is often user “engagement” practices that breed user-hostility and dark patterns in software. Disengagement here means completely forgoing the idea of user-screen engagement as a principle of software development and instead focusing on utility and meaning for the end user.

This view transcends software UIs and drives major business and organisational decisions. It is not possible to build user-centric software that focuses on utility without the business itself incorporating these principles. This philosophy drives pretty much everything at Zerodha. Some examples:

No advertising

Zerodha has never advertised its products or services. No online or offline ads. No paid placements. No marketing team. $0 marketing spend. Our central thesis and conviction is that if our products and services are truly useful to end-users and are fairly priced, users will choose them of their own volition. This has been my experience over a long period with my personal projects as well. Users who come that way and stay are highly likely to tell their peers about the product, and that is how our userbase has grown into the millions—via word of mouth. Of course, a realistic and viable business model made the whole enterprise sustainable and profitable.

When the growth strategy is word-of-mouth recommendations from users, it naturally directs all focus in the organisation to building products and services that are not only useful to users, but are good enough for them to recommend to their peers.

No pushing and annoying

Financial transactions executed via our end-user software generate revenue for us. However, our software never pushes users into transacting. No promotional or marketing e-mails, SMSes, or notifications are sent. Push notifications are used very conservatively, and never to bring users back to our apps in order to “engage” them. There are no annoying “rate this app” or “tell your friends” popups. There are no abrupt or drastic visual changes in the UIs.
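As a purely illustrative aside, and not a description of our actual systems, a policy like this can be reduced to a hard allowlist at the point of dispatch, rather than a per-user “engagement” optimisation. All category names and the function below are hypothetical:

```python
# Hypothetical sketch of a "transactional-only" notification policy.
# All names here are illustrative, not an actual Zerodha implementation.

from enum import Enum


class Category(Enum):
    TRANSACTION = "transaction"        # e.g. order executed, payment received
    ACCOUNT_ALERT = "account_alert"    # e.g. margin shortfall, regulatory notice
    PROMOTION = "promotion"            # offers, "exciting" deals
    RE_ENGAGEMENT = "re_engagement"    # "we miss you" style nudges


# Only notifications the user materially needs are ever dispatched.
ALLOWED = {Category.TRANSACTION, Category.ACCOUNT_ALERT}


def should_send(category: Category) -> bool:
    """Gate every outgoing notification through a fixed allowlist."""
    return category in ALLOWED


if __name__ == "__main__":
    assert should_send(Category.ACCOUNT_ALERT)
    assert not should_send(Category.PROMOTION)      # never dispatched
    assert not should_send(Category.RE_ENGAGEMENT)  # never dispatched
```

The point of the sketch is that the decision is a fixed, content-based policy, not something A/B-tested per user.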

As a financial services company, we have a large team handling incoming phone calls from users with support queries. We have forgone the concept of outgoing calls entirely, except for rare but serious account-related updates, or when a user specifically asks for a call back. Nobody appreciates uninvited non-personal phone calls.

No “engagement” tracking

There is no tracking or profiling of user “engagement” across any of our apps. There are never any conversations about “engagement” at all. User behaviour is not mined and profiled. No actions are taken to “re-engage” users who may not be actively using our software. Users come back and use our software when, and most importantly, if, they want to come back.

No gimmicks

We do not offer ephemeral discounts or freebies to entice users into signing up on our platform, which is meant for serious financial transactions. Signup is just a milestone in a meaningful user-product relationship. Choosing a financial services firm to manage one’s personal finances is a critical, long-term decision. So, any ephemeral freebie that is designed to entice a user into signing up has no logical basis in our view. If a user signs up, it should be of their own volition to specifically use our software and services for the long term.

In the same vein, we put conscious effort into making sure that we do not end up drinking our own Kool-Aid and building software features that fit our idea of coolness and usefulness, but may be gimmicks in reality. Seeing popular software with massive design teams go through the Nth radical UI paradigm shift always serves as a reminder. See dont.build for a droll version of this approach.

Meaningful disengagement

Some examples of direct, but meaningful disengagement:

  • We charge users a fee, albeit nominal, to sign up. This friction causes users without a strong intent to make financial investments to drop off. Our thesis is that users who find utility in our products and hear of them from their peers will come and stay of their own volition. The mere act of signing up does not create a user.

  • The Nudge system that is integrated across our investment apps actively discourages (and sometimes blocks) users from trades deemed risky, at the cost of forgoing the revenue we earn per trade. Logically, a user who makes losses on risky financial investments will not remain a user for much longer. Any attempt to push users into executing transactions that they did not intend to, for short-term revenue, is not only meaningless and unethical, but a sure-shot way of losing business in the long term.

  • The Kill Switch feature allows users to instantly lock themselves out of being able to execute trades for a set period, as a part of personal financial risk management. As it turns out, a significant number of users find this extremely useful. (A minimal sketch of how such pre-trade checks might work follows this list.)

  • Instant account closure. If a user wants to close their account and stop using our software for whatever reason, they can do so online instantly, including completing all the required legal formalities. Those who have jumped through painful hoops to cancel online subscriptions can appreciate this. I certainly do.
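To make the mechanics of the Nudge and Kill Switch examples concrete, here is a minimal, hypothetical sketch of how a self-imposed lockout and a risk warning might gate an order before it is placed. This is not our actual implementation; every name, type, and rule below is made up for illustration:

```python
# Hypothetical sketch of pre-trade checks in the spirit of the Kill Switch
# and Nudge features described above. Purely illustrative.

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional


@dataclass
class Order:
    symbol: str
    quantity: int
    is_high_risk: bool  # e.g. flagged by some instrument risk classifier


@dataclass
class Account:
    kill_switch_until: Optional[datetime] = None  # self-imposed trading lockout

    def activate_kill_switch(self, days: int) -> None:
        """Let the user lock themselves out of trading for a set period."""
        self.kill_switch_until = datetime.now() + timedelta(days=days)


def check_order(account: Account, order: Order) -> tuple[bool, str]:
    """Return (allowed, message) for an order before it is placed."""
    # Kill Switch: a user-activated lockout always blocks the trade,
    # even though every executed trade would earn the business revenue.
    if account.kill_switch_until and datetime.now() < account.kill_switch_until:
        return False, f"Kill Switch is active until {account.kill_switch_until:%d %b %Y}."

    # Nudge: a risky trade gets a warning the user must consciously
    # acknowledge; some categories could be blocked outright instead.
    if order.is_high_risk:
        return True, "Nudge: this trade is deemed risky. Are you sure?"

    return True, "OK"


if __name__ == "__main__":
    account = Account()
    print(check_order(account, Order("XYZ", 10, is_high_risk=True)))

    account.activate_kill_switch(days=7)
    print(check_order(account, Order("XYZ", 10, is_high_risk=False)))
```

The design choice worth noting in the sketch is that both checks run before any revenue is earned, and the user-protective rule takes precedence over everything else.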

Don’t do unto others …

That brings us to the ancient adage and the second tenet of the user disengagement philosophy: “don’t do unto others what you don’t want done unto you”. Alas, if humans were biologically wired for this, the world would be a utopia right now.

Consider a food delivery app that sends half a dozen notifications a day at odd hours pushing users to buy food, or a ride hailing app that randomly asks “want to go somewhere today?” (wut). These are probably backed by extensive data mining and A/B testing, and may well be driving “conversions”. The goal here is the maximisation of certain numbers, not utility for the end user. I always wonder whether the decision makers who employ these tactics themselves like being subjected to incessant notifications. Do they have notifications enabled on their own apps? Do the decision makers who decide to invade and monetise privacy like their own privacy being invaded? Do they like jumping through hoops to cancel a subscription? Do they like marketing popups and ads plastered all across their screens? I would like to think that the majority do not, and that it is either cognitive dissonance or hypocrisy, both fuelled by the relentless chasing of numbers. To my horror, I recently heard from a student at a premier business school in India that such “engagement” dark patterns are taught to them as product management strategies.

Come to think of it, one realises that this is a matter of ethics more than software. If decision makers have the conscience and liberty to ask themselves the simple question, “would I want this for myself as a user?”, a lot of user-hostility and garish, shambolic techno-product-business experiments that provide no utility to users would disappear. Of course, only when user-hostility itself is not the business model!

The over-reliance on statistics for designing human interactions in software, like mindless “big data” and A/B UI experiments to optimise “conversions”, ironically diminishes the human element in user interactions. There is a great need for common-sense first principles of human-computer interaction (HCI) to take centre stage in the software design process. That the common-sense approach to software design still exists in a few organisations, and that this spirit is still strong in the FOSS (Free and Open Source Software) world, is a consolation. If software were designed with means of meaningful disengagement rather than engagement, things would be a whole lot more pleasant instead of nasty.